# gemma-3-12b-it-FP8-Dynamic
A quantized version of the gemma-3-12b-it model, optimized for efficient inference with vLLM.
## Quick Start
This model can be deployed efficiently using the vLLM backend. Check the "Deployment" section below for a detailed example.
## Features
- Multimodal Input: Supports both vision and text inputs, enabling a wide range of applications.
- Quantization Optimization: Uses FP8 quantization for both weights and activations, reducing memory usage and accelerating inference (a rough memory estimate follows this list).
- Efficient Inference: Compatible with vLLM, which provides high-throughput serving and OpenAI-compatible APIs.
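As a back-of-the-envelope illustration (not an official measurement): FP8 stores one byte per weight versus two bytes for BF16, so weight memory is roughly halved. Note that the vision tower and LM head are kept in their original precision (see the creation recipe below), so the real saving is somewhat smaller.

```python
# Back-of-the-envelope weight-memory estimate for a ~12B-parameter model.
# Illustrative only: excludes activations, KV cache, and runtime overhead.
NUM_PARAMS = 12e9  # approximate parameter count of gemma-3-12b-it

bf16_gb = NUM_PARAMS * 2 / 1e9  # BF16: 2 bytes per parameter -> ~24 GB
fp8_gb = NUM_PARAMS * 1 / 1e9   # FP8:  1 byte per parameter  -> ~12 GB

print(f"BF16 weights: ~{bf16_gb:.0f} GB, FP8 weights: ~{fp8_gb:.0f} GB")
```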
## Installation
No model-specific installation step is required. You do, however, need the usual dependencies: `vllm` for inference, `transformers` for the processor, and `llmcompressor` (only needed to reproduce the quantization). Install them with `pip`:

```bash
pip install vllm transformers llmcompressor
```
## Usage Examples
### Basic Usage
This model can be deployed efficiently using the vLLM backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from vllm.assets.image import ImageAsset
from transformers import AutoProcessor

model_name = "RedHatAI/gemma-3-12b-it-FP8-dynamic"

# Load a sample image bundled with vLLM.
image = ImageAsset("cherry_blossom").pil_image.convert("RGB")

# Build the prompt with the model's chat template. No trailing empty
# assistant turn is needed: add_generation_prompt=True already appends
# the assistant header for generation.
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
chat = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is the content of this image?"},
        ],
    },
]
prompt = processor.apply_chat_template(chat, add_generation_prompt=True, tokenize=False)

llm = LLM(model=model_name, trust_remote_code=True)
inputs = {"prompt": prompt, "multi_modal_data": {"image": [image]}}
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print("RESPONSE:", outputs[0].outputs[0].text)
```
vLLM also supports OpenAI-compatible serving; see the vLLM documentation for more details.
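As an illustration (not from the original card): start the server with `vllm serve RedHatAI/gemma-3-12b-it-FP8-dynamic` and query it with the official `openai` client. The port and sampling parameters below are illustrative defaults.

```python
# Query a vLLM OpenAI-compatible server started with:
#   vllm serve RedHatAI/gemma-3-12b-it-FP8-dynamic
# The URL/port are vLLM's defaults; the API key is unused unless configured.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="RedHatAI/gemma-3-12b-it-FP8-dynamic",
    messages=[{"role": "user", "content": "Summarize FP8 quantization in one sentence."}],
    temperature=0.2,
    max_tokens=64,
)
print(response.choices[0].message.content)
```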
## Documentation
### Model Overview
- Model Architecture: gemma-3-12b-it
- Input: Vision-Text
- Output: Text
- Model Optimizations:
- Weight quantization: FP8
- Activation quantization: FP8
- Release Date: 2/24/2025
- Version: 1.0
- Model Developers: Neural Magic
This is a quantized version of google/gemma-3-12b-it.
### Model Optimizations
This model was obtained by quantizing the weights of google/gemma-3-12b-it to the FP8 data type; activations are quantized to FP8 dynamically at inference time. The checkpoint is ready for inference with vLLM >= 0.5.2.
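For intuition, here is a minimal sketch of FP8 (E4M3) quantization with a dynamically computed scale. This is illustrative only: the actual FP8_DYNAMIC scheme uses static per-channel scales for weights and dynamic per-token scales for activations, with the scaling fused into the matmul kernels.

```python
import torch

# Minimal sketch of dynamic per-tensor FP8 (E4M3) quantization.
FP8_E4M3_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def quantize_fp8_dynamic(x: torch.Tensor):
    scale = x.abs().max() / FP8_E4M3_MAX       # scale computed on the fly ("dynamic")
    x_fp8 = (x / scale).to(torch.float8_e4m3fn)
    return x_fp8, scale

def dequantize(x_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return x_fp8.to(torch.float32) * scale

x = torch.randn(4, 8)
x_fp8, scale = quantize_fp8_dynamic(x)
print("max abs error:", (x - dequantize(x_fp8, scale)).abs().max().item())
```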
### Deployment
This model can be deployed using the vLLM backend. See the "Usage Examples" section for a code example.
### Creation
This model was created with llm-compressor by running the code snippet below as part of a multimodal announcement blog.
#### Model Creation Code
```python
from transformers import AutoProcessor, Gemma3ForConditionalGeneration
from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

model_id = "google/gemma-3-12b-it"

# Load the base model and its processor.
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# FP8-dynamic recipe: quantize all Linear layers, but keep the LM head,
# vision tower, and multimodal projector in their original precision.
recipe = [
    QuantizationModifier(
        targets="Linear",
        scheme="FP8_DYNAMIC",
        sequential_targets=["Gemma3DecoderLayer"],
        ignore=["re:.*lm_head", "re:vision_tower.*", "re:multi_modal_projector.*"],
    ),
]

SAVE_DIR = f"{model_id.split('/')[1]}-FP8-Dynamic"

# Apply the recipe and write the compressed checkpoint.
oneshot(
    model=model,
    recipe=recipe,
    trust_remote_code_model=True,
    output_dir=SAVE_DIR,
)
processor.save_pretrained(SAVE_DIR)  # keep the processor alongside the weights
```
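The resulting `SAVE_DIR` checkpoint can then be loaded directly by vLLM (e.g. `LLM(model=SAVE_DIR)`) or served with `vllm serve`, exactly as in the usage example above.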
### Evaluation
The model was evaluated with lm_evaluation_harness on the OpenLLM v1 text benchmark; vision results (MMMU, ChartQA) are reported in the Accuracy table below. The OpenLLM evaluations were conducted using the following command:
#### Evaluation Commands

**OpenLLM v1**
```bash
lm_eval \
  --model vllm \
  --model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True,enforce_eager=True \
  --tasks openllm \
  --batch_size auto
```
### Accuracy
| Category | Metric | google/gemma-3-12b-it | RedHatAI/gemma-3-12b-it-FP8-Dynamic | Recovery (%) |
|---|---|---|---|---|
| OpenLLM V1 | ARC Challenge | 68.43% | 68.86% | 100.62% |
| OpenLLM V1 | GSM8K | 88.10% | 88.02% | 99.91% |
| OpenLLM V1 | Hellaswag | 83.76% | 83.78% | 100.02% |
| OpenLLM V1 | MMLU | 72.15% | 71.80% | 99.51% |
| OpenLLM V1 | TruthfulQA (mc2) | 58.13% | 59.35% | 102.09% |
| OpenLLM V1 | Winogrande | 79.40% | 79.48% | 100.10% |
| OpenLLM V1 | **Average Score** | **74.99%** | **75.21%** | **100.29%** |
| Vision Evals | MMMU (val) | 48.78% | 49.00% | 100.45% |
| Vision Evals | ChartQA | 68.08% | 68.88% | 101.18% |
| Vision Evals | **Average Score** | **58.43%** | **58.94%** | **100.81%** |
## License
This model inherits the Gemma Terms of Use from google/gemma-3-12b-it.