
Gemma 3 12B IT QAT 4-bit

Developed by mlx-community
MLX-format model converted from google/gemma-3-12b-it-qat-q4_0-unquantized, supporting image-text generation tasks
Downloads: 984
Released: 4/15/2025

Model Overview

This is a quantized multimodal model that supports image-text generation tasks and runs efficiently under the MLX framework.
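A minimal sketch of running this model from the command line with the mlx-vlm package, which is commonly used for mlx-community multimodal models. The repository name, the image path, and the exact CLI flags are assumptions and may differ between mlx-vlm versions:

```shell
# Sketch: assumes an Apple Silicon Mac and that mlx-vlm's CLI accepts these flags.
pip install mlx-vlm

python -m mlx_vlm.generate \
  --model mlx-community/gemma-3-12b-it-qat-4bit \
  --image photo.jpg \
  --prompt "Describe this image." \
  --max-tokens 100
```

The 4-bit weights are downloaded on first use; subsequent runs load from the local Hugging Face cache.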

Model Features

4-bit quantization
The model's weights are quantized to 4 bits, significantly reducing memory usage.
MLX compatibility
Optimized for the MLX framework, enabling efficient inference on Apple Silicon devices.
Multimodal support
Accepts combined image and text inputs for joint processing and text generation.
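The memory savings from 4-bit quantization can be estimated with simple arithmetic. The sketch below compares approximate weight storage at 16-bit and 4-bit precision for a 12-billion-parameter model; it ignores activation memory and the small overhead of quantization scales, so real figures will be somewhat higher:

```python
def approx_weight_memory_gb(num_params: float, bits_per_param: float) -> float:
    """Approximate weight storage in GB: params * bits / 8 bits-per-byte / 1e9."""
    return num_params * bits_per_param / 8 / 1e9

PARAMS = 12e9  # ~12 billion parameters

fp16_gb = approx_weight_memory_gb(PARAMS, 16)  # ~24 GB at 16-bit
q4_gb = approx_weight_memory_gb(PARAMS, 4)     # ~6 GB at 4-bit
print(f"fp16: {fp16_gb:.1f} GB, 4-bit: {q4_gb:.1f} GB")
```

Roughly a 4x reduction, which is what brings a 12B multimodal model within reach of consumer Apple Silicon memory budgets.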

Model Capabilities

Image caption generation
Multilingual text generation
Joint image-text understanding

Use Cases

Content generation
Image captioning: generate detailed descriptions of uploaded images, producing text that accurately reflects image content.
Visual question answering: answer natural-language questions about an image, providing accurate, image-grounded responses.