pixtral-12b-quantized.w8a8
A quantized version of mgoin/pixtral-12b, optimized for efficient deployment with vLLM.
Quick Start
This quantized model can be deployed efficiently using the vLLM backend. Here is a simple example to get you started:
```python
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams

# prepare model
llm = LLM(
    model="neuralmagic/pixtral-12b-quantized.w8a8",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)

# prepare inputs
question = "What is the content of this image?"
inputs = {
    "prompt": f"<|user|>\n<|image_1|>\n{question}<|end|>\n<|assistant|>\n",
    "multi_modal_data": {
        "image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
    },
}

# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```
Features
- Quantization Optimization: Both weights and activations are quantized to INT8, making the model ready for inference with vLLM >= 0.5.2.
- Multimodal Support: Capable of handling vision-text inputs and generating text outputs.
- Efficient Deployment: Can be deployed efficiently using the vLLM backend.
Installation
No model-specific installation steps are provided in the original README. You need vLLM (>= 0.5.2) and its dependencies installed to use this model; vLLM is typically installed with `pip install vllm`.
Usage Examples
Basic Usage
The basic usage example is shown above in the Quick Start section.
Advanced Usage
You can refer to the vLLM documentation for more advanced deployment and usage scenarios.
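As one example, the sketch below shows multi-image inference using vLLM's `limit_mm_per_prompt` option (the same option the evaluation serve command below sets via `--limit_mm_per_prompt image=7`). This is a minimal sketch, not part of the original README: the prompt template simply follows the Quick Start above, and values such as `max_model_len`, the two-image limit, and the question text are illustrative.
```python
from vllm import LLM, SamplingParams
from vllm.assets.image import ImageAsset

# Allow up to two images per prompt (illustrative; adjust as needed).
llm = LLM(
    model="neuralmagic/pixtral-12b-quantized.w8a8",
    trust_remote_code=True,
    max_model_len=8192,
    max_num_seqs=2,
    limit_mm_per_prompt={"image": 2},
)

# Two sample images bundled with vLLM's test assets.
image_1 = ImageAsset("cherry_blossom").pil_image.convert("RGB")
image_2 = ImageAsset("stop_sign").pil_image.convert("RGB")

question = "What do these two images have in common?"
inputs = {
    # Follows the prompt format used in the Quick Start above.
    "prompt": f"<|user|>\n<|image_1|>\n<|image_2|>\n{question}<|end|>\n<|assistant|>\n",
    "multi_modal_data": {"image": [image_1, image_2]},
}

outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(outputs[0].outputs[0].text)
```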
Documentation
Model Overview
- Model Architecture: mgoin/pixtral-12b
- Input: Vision-Text
- Output: Text
- Model Optimizations:
  - Weight quantization: INT8
  - Activation quantization: INT8
- Release Date: 2/24/2025
- Version: 1.0
- Model Developers: Neural Magic
This model is a quantized version of mgoin/pixtral-12b, obtained by quantizing the weights and activations to the INT8 data type. It is ready for inference with vLLM >= 0.5.2.
Deployment
This model can be deployed efficiently using the vLLM backend. vLLM also supports OpenAI-compatible serving. See the documentation for more details.
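As a minimal sketch of OpenAI-compatible serving (assuming the server has been started with the `vllm serve neuralmagic/pixtral-12b-quantized.w8a8 ...` command shown under Evaluation, and that the `openai` Python client is installed; the image URL is a placeholder):
```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server (no real API key needed).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/pixtral-12b-quantized.w8a8",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is the content of this image?"},
                # Placeholder URL; replace with a reachable image.
                {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
            ],
        }
    ],
    temperature=0.2,
    max_tokens=64,
)
print(response.choices[0].message.content)
```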
Creation
This model was created with llm-compressor by running the following code snippet as part of a multimodal announcement blog:
Model Creation Code
```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import TraceableLlavaForConditionalGeneration

# Load model.
model_id = "mgoin/pixtral-12b"
model = TraceableLlavaForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Oneshot arguments
DATASET_ID = "flickr30k"
DATASET_SPLIT = {"calibration": "test[:512]"}
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048

# Define a oneshot data collator for multimodal inputs.
def data_collator(batch):
    assert len(batch) == 1
    return {
        "input_ids": torch.LongTensor(batch[0]["input_ids"]),
        "attention_mask": torch.tensor(batch[0]["attention_mask"]),
        "pixel_values": torch.tensor(batch[0]["pixel_values"]),
    }

# Recipe
recipe = [
    GPTQModifier(
        targets="Linear",
        scheme="W8A8",
        sequential_targets=["MistralDecoderLayer"],
        ignore=["re:.*lm_head", "re:vision_tower.*", "re:multi_modal_projector.*"],
    ),
]

SAVE_DIR = f"{model_id.split('/')[1]}-quantized.w8a8"

# Perform oneshot
oneshot(
    model=model,
    tokenizer=model_id,
    dataset=DATASET_ID,
    splits=DATASET_SPLIT,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    trust_remote_code_model=True,
    data_collator=data_collator,
    output_dir=SAVE_DIR,
)
```
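The recipe quantizes only the language model's Linear layers: the `ignore` list excludes `lm_head`, the vision tower, and the multi-modal projector, so visual feature extraction keeps its original precision. As a hedged sanity check (not part of the original snippet; the prompt text is illustrative), a short text-only generation with the names defined above can confirm the quantized model still produces output:
```python
# Hedged sanity check, continuing from the snippet above (uses `model` and `processor`).
# Runs a short text-only generation with the freshly quantized in-memory model.
sample = processor(text="Describe a cherry blossom tree.", return_tensors="pt").to(model.device)
generated = model.generate(**sample, max_new_tokens=32, do_sample=False)
print(processor.decode(generated[0], skip_special_tokens=True))
```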
Evaluation
The model was evaluated using mistral-evals for vision-related tasks and using lm_evaluation_harness for select text-based benchmarks. The evaluations were conducted using the following commands:
Evaluation Commands
Vision Tasks
- vqav2
- docvqa
- mathvista
- mmmu
- chartqa
```
vllm serve neuralmagic/pixtral-12b-quantized.w8a8 --tensor_parallel_size 1 --max_model_len 25000 --trust_remote_code --max_num_seqs 8 --gpu_memory_utilization 0.9 --dtype float16 --limit_mm_per_prompt image=7
```

```
python -m eval.run eval_vllm \
  --model_name neuralmagic/pixtral-12b-quantized.w8a8 \
  --url http://0.0.0.0:8000 \
  --output_dir ~/tmp \
  --eval_name <vision_task_name>
```
Text-based Tasks
MMLU
```
lm_eval \
  --model vllm \
  --model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks mmlu \
  --num_fewshot 5 \
  --batch_size auto \
  --output_path output_dir
```
HumanEval
Generation
```
python3 codegen/generate.py \
  --model neuralmagic/pixtral-12b-quantized.w8a8 \
  --bs 16 \
  --temperature 0.2 \
  --n_samples 50 \
  --root "." \
  --dataset humaneval
```
Sanitization
```
python3 evalplus/sanitize.py \
  humaneval/neuralmagic/pixtral-12b-quantized.w8a8_vllm_temp_0.2
```
Evaluation
```
evalplus.evaluate \
  --dataset humaneval \
  --samples humaneval/neuralmagic/pixtral-12b-quantized.w8a8_vllm_temp_0.2-sanitized
```
Accuracy
| Category | Metric | mgoin/pixtral-12b | neuralmagic/pixtral-12b-quantized.w8a8 | Recovery (%) |
|---|---|---|---|---|
| Vision | MMMU (val, CoT)<br>explicit_prompt_relaxed_correctness | 48.00 | 46.22 | 96.29% |
| Vision | VQAv2 (val)<br>vqa_match | 78.71 | 78.00 | 99.10% |
| Vision | DocVQA (val)<br>anls | 89.47 | 89.35 | 99.87% |
| Vision | ChartQA (test, CoT)<br>anywhere_in_answer_relaxed_correctness | 81.68 | 81.60 | 99.90% |
| Vision | Mathvista (testmini, CoT)<br>explicit_prompt_relaxed_correctness | 56.50 | 57.30 | 101.42% |
| Vision | Average Score | 70.07 | 70.09 | 100.03% |
| Text | HumanEval<br>pass@1 | 68.40 | 66.39 | 97.06% |
| Text | MMLU (5-shot) | 71.40 | 70.50 | 98.74% |
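As the table values indicate, Recovery (%) is the quantized model's score divided by the original model's score for each benchmark; for example, the MMMU row gives 46.22 / 48.00 ≈ 96.29%.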
Inference Performance
This model achieves up to 1.57x speedup in single-stream deployment and up to 1.53x speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario. The following performance benchmarks were conducted with vLLM version 0.7.2, and GuideLLM.
Benchmarking Command
```
guidellm --model neuralmagic/pixtral-12b-quantized.w8a8 --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=
```

Single-stream performance (measured with vLLM version 0.7.2)
| Hardware | Model | Average Cost Reduction | Document Visual Question Answering<br>1680W x 2240H<br>64/128<br>Latency (s) | Document Visual Question Answering<br>1680W x 2240H<br>64/128<br>Queries Per Dollar | Visual Reasoning<br>640W x 480H<br>128/128<br>Latency (s) | Visual Reasoning<br>640W x 480H<br>128/128<br>Queries Per Dollar | Image Captioning<br>480W x 360H<br>0/128<br>Latency (s) | Image Captioning<br>480W x 360H<br>0/128<br>Queries Per Dollar |
|---|---|---|---|---|---|---|---|---|
| A6000x1 | mgoin/pixtral-12b | | 5.7 | 796 | 4.8 | 929 | 4.7 | 964 |
| A6000x1 | neuralmagic/pixtral-12b-quantized.w8a8 | 1.55 | 3.7 | 1220 | 3.1 | 1437 | 3.0 | 1511 |
| A6000x1 | neuralmagic/pixtral-12b-quantized.w4a16 | 2.16 | 3.2 | 1417 | 2.1 | 2093 | 1.9 | 2371 |
| A100x1 | mgoin/pixtral-12b | | 3.0 | 676 | 2.4 | 825 | 2.3 | 859 |
| A100x1 | neuralmagic/pixtral-12b-quantized.w8a8 | 1.38 | 2.2 | 904 | 1.7 | 1159 | 1.7 | 1201 |
| A100x1 | neuralmagic/pixtral-12b-quantized.w4a16 | 1.83 | 1.8 | 1096 | 1.3 | 1557 | 1.2 | 1702 |
| H100x1 | mgoin/pixtral-12b | | 1.8 | 595 | 1.5 | 732 | 1.4 | 764 |
| H100x1 | neuralmagic/pixtral-12b-FP8-Dynamic | 1.35 | 1.4 | 767 | 1.1 | 1008 | 1.0 | 1056 |
| H100x1 | neuralmagic/pixtral-12b-quantized.w4a16 | 1.37 | 1.4 | 787 | 1.1 | 1018 | 1.0 | 1065 |
**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens
**QPD: Queries per dollar, based on on-demand cost at Lambda Labs (observed on 2/18/2025).
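As the table values suggest, Average Cost Reduction corresponds to the mean queries-per-dollar ratio between the quantized model and the unquantized baseline across the three use-case profiles; for example, the A6000x1 w8a8 row works out to (1220/796 + 1437/929 + 1511/964) / 3 ≈ 1.55.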
Multi-stream asynchronous performance (measured with vLLM version 0.7.2)
| Hardware | Model | Average Cost Reduction | Document Visual Question Answering<br>1680W x 2240H<br>64/128<br>Maximum throughput (QPS) | Document Visual Question Answering<br>1680W x 2240H<br>64/128<br>Queries Per Dollar | Visual Reasoning<br>640W x 480H<br>128/128<br>Maximum throughput (QPS) | Visual Reasoning<br>640W x 480H<br>128/128<br>Queries Per Dollar | Image Captioning<br>480W x 360H<br>0/128<br>Maximum throughput (QPS) | Image Captioning<br>480W x 360H<br>0/128<br>Queries Per Dollar |
|---|---|---|---|---|---|---|---|---|
| A6000x1 | mgoin/pixtral-12b | | 0.6 | 2632 | 0.9 | 4108 | 1.1 | 4774 |
| A6000x1 | neuralmagic/pixtral-12b-quantized.w8a8 | 1.50 | 0.9 | 3901 | 1.4 | 6160 | 1.6 | 7292 |
| A6000x1 | neuralmagic/pixtral-12b-quantized.w4a16 | 1.41 | 0.6 | 2890 | 1.3 | 5758 | 1.8 | 8312 |
| A100x1 | mgoin/pixtral-12b | | 1.1 | 2291 | 1.8 | 3670 | 2.1 | 4284 |
| A100x1 | neuralmagic/pixtral-12b-quantized.w8a8 | 1.38 | 1.5 | 3096 | 2.5 | 5076 | 3.0 | 5965 |
| A100x1 | neuralmagic/pixtral-12b-quantized.w4a16 | 1.40 | 1.4 | 2728 | 2.6 | 5133 | 3.5 | 6943 |
| H100x1 | BF16 | | 2.6 | 2877 | 4.0 | 4372 | 4.7 | 5095 |
| H100x1 | neuralmagic/pixtral-12b-FP8-Dynamic | 1.33 | 3.4 | 3753 | 5.4 | 5862 | 6.3 | 6917 |
| H100x1 | neuralmagic/pixtral-12b-quantized.w4a16 | 1.22 | 2.8 | 3115 | 5.0 | 5511 | 6.2 | 6777 |
**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens
**QPS: Queries per second.
**QPD: Queries per dollar, based on on-demand cost at Lambda Labs (observed on 2/18/2025).
Technical Details
The technical details mainly cover the quantization process and the evaluation methodology: quantization is performed with llm-compressor, and evaluation uses mistral-evals for vision tasks and lm_evaluation_harness for text-based benchmarks.
License
This model is licensed under the Apache-2.0 license.






