# gemma-3-4b-it-quantized.w4a16
This is a quantized version of google/gemma-3-4b-it, optimized for efficient inference.
## Quick Start
This model can be deployed efficiently using the vLLM backend. For a detailed deployment example, refer to the "Deployment" section below.
## Features
- Model Architecture: Based on google/gemma-3-4b-it, supporting Vision-Text input and text output.
- Model Optimizations:
  - Weight quantization: INT4
  - Activation quantization: FP16
- Release Date: 6/4/2025
- Version: 1.0
- Model Developers: RedHatAI
## Installation
No model-specific installation steps are required. Inference uses vLLM (version 0.8.0 or later), which can be installed with `pip install vllm` if it is not already available.
## Usage Examples

### Basic Usage
```python
from vllm import LLM, SamplingParams
from vllm.assets.image import ImageAsset
from transformers import AutoProcessor

model_name = "RedHatAI/gemma-3-4b-it-quantized.w4a16"

# Load a sample image and the model's processor (used to build the chat prompt).
image = ImageAsset("cherry_blossom").pil_image.convert("RGB")
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)

chat = [
    {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "What is the content of this image?"}]},
    {"role": "assistant", "content": []},
]
prompt = processor.apply_chat_template(chat, add_generation_prompt=True)

# Run multimodal generation with vLLM.
llm = LLM(model=model_name, trust_remote_code=True)
inputs = {"prompt": prompt, "multi_modal_data": {"image": [image]}}
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print("RESPONSE:", outputs[0].outputs[0].text)
```
### Advanced Usage

Advanced usage centers on model creation and evaluation; for details, see the "Creation" and "Evaluation" sections below.
## Documentation

### Model Overview
- Model Architecture: google/gemma-3-4b-it
  - Input: Vision-Text
  - Output: Text
- Model Optimizations:
  - Weight quantization: INT4
  - Activation quantization: FP16
- Release Date: 6/4/2025
- Version: 1.0
- Model Developers: RedHatAI
This model is a quantized version of google/gemma-3-4b-it, obtained by quantizing the weights of the original model to the INT4 data type, ready for inference with vLLM >= 0.8.0.
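For intuition, a back-of-the-envelope sketch (illustrative only, assuming roughly 4 billion parameters and ignoring the layers kept at higher precision, quantization scales/zero-points, activations, and KV cache) shows why 4-bit weights matter: they occupy about a quarter of the memory of 16-bit weights.

```python
# Back-of-the-envelope weight-memory estimate, not a measurement.
# Assumes ~4e9 parameters; ignores higher-precision layers, quantization
# metadata (scales/zero-points), activations, and KV cache.
params = 4e9
bf16_gib = params * 16 / 8 / 2**30  # 16 bits per weight
int4_gib = params * 4 / 8 / 2**30   # 4 bits per weight
print(f"weights only: ~{bf16_gib:.1f} GiB at 16-bit vs ~{int4_gib:.1f} GiB at INT4")
```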
### Deployment

#### Use with vLLM

This model can be deployed efficiently using the vLLM backend. vLLM also supports OpenAI-compatible serving; see the vLLM documentation for more details.
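The snippet below is a minimal serving sketch rather than an official deployment recipe: it assumes the OpenAI-compatible server has been started separately (for example with `vllm serve RedHatAI/gemma-3-4b-it-quantized.w4a16`), that it is listening on the default `http://localhost:8000/v1`, and that the `openai` Python client is installed.

```python
# Minimal client sketch against a vLLM OpenAI-compatible server (assumed to be
# running already, e.g. via: vllm serve RedHatAI/gemma-3-4b-it-quantized.w4a16).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # default vLLM server address; adjust if needed
    api_key="EMPTY",                      # vLLM does not check the key unless configured to
)

response = client.chat.completions.create(
    model="RedHatAI/gemma-3-4b-it-quantized.w4a16",
    messages=[{"role": "user", "content": "Summarize what weight quantization does in one sentence."}],
    temperature=0.2,
    max_tokens=64,
)
print(response.choices[0].message.content)
```

Because the endpoint follows the OpenAI API, existing OpenAI-based clients can be pointed at the quantized model by changing only `base_url` and `model`.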
### Creation

This model was created with llm-compressor by running the code snippet below.

#### Model Creation Code
```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

# Load the original model and its processor.
model_id = "google/gemma-3-4b-it"
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Calibration dataset configuration.
DATASET_ID = "neuralmagic/calibration"
DATASET_SPLIT = {"LLM": "train[:512]"}
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048

ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42)

dampening_frac = 0.05

def data_collator(batch):
    """Collate a single calibration sample into batched tensors."""
    assert len(batch) == 1, "Only batch size of 1 is supported for calibration"
    item = batch[0]
    collated = {}
    for key, value in item.items():
        if isinstance(value, torch.Tensor):
            collated[key] = value.unsqueeze(0)
        elif isinstance(value, list) and isinstance(value[0][0], int):
            collated[key] = torch.tensor(value)
        elif isinstance(value, list) and isinstance(value[0][0], float):
            collated[key] = torch.tensor(value)
        elif isinstance(value, list) and isinstance(value[0][0], torch.Tensor):
            collated[key] = torch.stack(value)
        else:
            print(f"[WARN] Unrecognized type in collator for key={key}, type={type(value)}")
    return collated

# GPTQ recipe: quantize Linear weights to INT4, leaving the LM head, embeddings,
# vision tower, and multimodal projector at their original precision.
recipe = [
    GPTQModifier(
        targets="Linear",
        ignore=["re:.*lm_head.*", "re:.*embed_tokens.*", "re:vision_tower.*", "re:multi_modal_projector.*"],
        sequential_update=True,
        sequential_targets=["Gemma3DecoderLayer"],
        dampening_frac=dampening_frac,
    )
]

SAVE_DIR = f"{model_id.split('/')[1]}-quantized.w4a16"

# Apply one-shot quantization and save the compressed model.
oneshot(
    model=model,
    tokenizer=model_id,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    trust_remote_code_model=True,
    data_collator=data_collator,
    output_dir=SAVE_DIR,
)
```
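As an optional sanity check, the quantization settings serialized into the saved `config.json` can be inspected. This is a sketch under the assumption that the run above completed and wrote the compressed model to `SAVE_DIR`; the exact fields recorded vary with the llm-compressor version.

```python
# Optional sanity check: print the quantization settings recorded in the saved config.
# Assumes the oneshot run above wrote the model to SAVE_DIR (local path, adjust as needed).
import json
import os

SAVE_DIR = "gemma-3-4b-it-quantized.w4a16"  # same value as in the creation script
with open(os.path.join(SAVE_DIR, "config.json")) as f:
    config = json.load(f)

print(json.dumps(config.get("quantization_config", {}), indent=2))
```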
### Evaluation

The model was evaluated on the OpenLLM v1 text benchmarks using lm-evaluation-harness; the accuracy table below also reports results on vision benchmarks (MMMU, ChartQA). The text evaluations were run with the following command:

#### Evaluation Commands

**OpenLLM v1**

```bash
lm_eval \
  --model vllm \
  --model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True,enforce_eager=True \
  --tasks openllm \
  --batch_size auto
```
Accuracy
Category |
Metric |
google/gemma-3-4b-it |
RedHatAI/gemma-3-4b-it-quantized.w4a16 |
Recovery (%) |
OpenLLM V1 |
ARC Challenge |
56.57% |
56.57% |
100.00% |
OpenLLM V1 |
GSM8K |
76.12% |
72.33% |
95.02% |
OpenLLM V1 |
Hellaswag |
74.96% |
73.35% |
97.86% |
OpenLLM V1 |
MMLU |
58.38% |
56.33% |
96.49% |
OpenLLM V1 |
Truthfulqa (mc2) |
51.87% |
50.81% |
97.96% |
OpenLLM V1 |
Winogrande |
70.32% |
68.82% |
97.87% |
OpenLLM V1 |
Average Score |
64.70% |
63.04% |
97.42% |
Vision Evals |
MMMU (val) |
39.89% |
40.11% |
100.55% |
Vision Evals |
ChartQA |
50.76% |
49.32% |
97.16% |
Vision Evals |
Average Score |
45.33% |
44.72% |
98.86% |
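For reference, the Recovery column is simply the quantized score expressed as a percentage of the baseline score; the tiny helper below (illustrative only, not part of the evaluation tooling) reproduces the GSM8K row.

```python
# Illustrative only: recovery (%) = quantized score / baseline score * 100.
def recovery(baseline: float, quantized: float) -> float:
    return quantized / baseline * 100

print(f"{recovery(76.12, 72.33):.2f}%")  # GSM8K row above -> 95.02%
```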
## License

This model is released under the Gemma license.