# Qwen2.5-VL-7B-Instruct-quantized.w8a8

This is a quantized version of Qwen/Qwen2.5-VL-7B-Instruct that offers improved inference efficiency for vision-text tasks.
## 🚀 Quick Start
This model can be deployed efficiently using the vLLM backend. Here is a simple example to get you started:
```python
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams

# prepare model
llm = LLM(
    model="neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)

# prepare inputs
question = "What is the content of this image?"
inputs = {
    "prompt": f"<|user|>\n<|image_1|>\n{question}<|end|>\n<|assistant|>\n",
    "multi_modal_data": {
        "image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
    },
}

# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT  : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```
vLLM also supports OpenAI-compatible serving. See the documentation for more details.
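As a minimal sketch (the port, serving flags, and image URL below are illustrative placeholders, not part of this card), a served instance could be queried with the OpenAI Python client like this:

```python
from openai import OpenAI

# Assumes the model was started separately with vLLM's OpenAI-compatible server,
# e.g.: vllm serve neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8 --max-model-len 4096
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8",
    messages=[{
        "role": "user",
        "content": [
            # Placeholder image URL; replace with your own image.
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
            {"type": "text", "text": "What is the content of this image?"},
        ],
    }],
    temperature=0.2,
    max_tokens=64,
)
print(response.choices[0].message.content)
```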
## ✨ Features
- Quantization Optimization: The model uses INT8 weight quantization and INT8 activation quantization, which significantly reduces memory usage and speeds up inference.
- Multimodal Support: It can handle vision-text inputs and generate text outputs, suitable for various multimodal tasks.
## 📦 Installation
The installation mainly involves setting up the necessary libraries. You can install the required dependencies according to the official documentation of vLLM and transformers.
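As a quick sanity check (assuming vLLM and transformers were installed from PyPI), you can confirm the versions available in your environment:

```python
# Minimal environment check; this card targets vLLM >= 0.5.2
# (performance numbers below were measured with vLLM 0.7.2).
import vllm
import transformers

print("vllm:", vllm.__version__)
print("transformers:", transformers.__version__)
```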
## 💻 Usage Examples

### Basic Usage
The quick-start example above shows basic usage of the model for vision-text question answering.

### Advanced Usage
You can adjust the parameters of `SamplingParams` to fit your requirements, for example changing `temperature` to control the randomness of the generated text or `max_tokens` to limit the length of the response.
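For instance, a sketch that reuses the `llm` and `inputs` objects from the quick-start snippet with different sampling settings (the values here are illustrative, not recommendations):

```python
from vllm import SamplingParams

# More deterministic, longer answers than the quick-start defaults.
sampling_params = SamplingParams(
    temperature=0.0,   # greedy-like decoding for reproducible answers
    top_p=0.9,         # nucleus-sampling cutoff (has no effect at temperature=0.0)
    max_tokens=256,    # allow longer responses
)

outputs = llm.generate(inputs, sampling_params)
print(outputs[0].outputs[0].text)
```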
## 📚 Documentation

### Model Overview
- Model Architecture: Qwen/Qwen2.5-VL-7B-Instruct
  - Input: Vision-Text
  - Output: Text
- Model Optimizations:
  - Weight quantization: INT8
  - Activation quantization: INT8
- Release Date: 2/24/2025
- Version: 1.0
- Model Developers: Neural Magic
### Model Optimizations

This model was obtained by quantizing the weights and activations of Qwen/Qwen2.5-VL-7B-Instruct to the INT8 data type, ready for inference with vLLM >= 0.5.2.
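As a rough, illustrative estimate (parameter count approximated at 7B, ignoring activations, KV cache, and the vision tower, which is left unquantized by the recipe below), the weight-memory saving from INT8 can be gauged as follows:

```python
# Back-of-the-envelope weight memory estimate (illustrative numbers only).
num_params = 7e9                       # ~7B language-model parameters (approximation)
bf16_gib = num_params * 2 / 1024**3    # 2 bytes per BF16 weight  -> ~13.0 GiB
int8_gib = num_params * 1 / 1024**3    # 1 byte per INT8 weight   -> ~6.5 GiB
print(f"BF16: {bf16_gib:.1f} GiB, INT8: {int8_gib:.1f} GiB")
```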
### Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below, as part of a multimodal announcement blog.
Model Creation Code
```python
import base64
from io import BytesIO

import torch
from datasets import load_dataset
from qwen_vl_utils import process_vision_info
from transformers import AutoProcessor
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import (
    TraceableQwen2_5_VLForConditionalGeneration,
)

# Load model.
model_id = "Qwen/Qwen2.5-VL-7B-Instruct"
model = TraceableQwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Oneshot arguments
DATASET_ID = "lmms-lab/flickr30k"
DATASET_SPLIT = {"calibration": "test[:512]"}
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048

# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42)
dampening_frac = 0.01  # GPTQ dampening fraction (passed to the modifier below)

# Apply chat template and tokenize inputs.
def preprocess_and_tokenize(example):
    # preprocess: encode the PIL image as a base64 data URI for the chat template
    buffered = BytesIO()
    example["image"].save(buffered, format="PNG")
    encoded_image = base64.b64encode(buffered.getvalue())
    encoded_image_text = encoded_image.decode("utf-8")
    base64_qwen = f"data:image;base64,{encoded_image_text}"
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": base64_qwen},
                {"type": "text", "text": "What does the image show?"},
            ],
        }
    ]
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)
    # tokenize
    return processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
    )

ds = ds.map(preprocess_and_tokenize, remove_columns=ds["calibration"].column_names)

# Define a oneshot data collator for multimodal inputs.
def data_collator(batch):
    assert len(batch) == 1
    return {key: torch.tensor(value) for key, value in batch[0].items()}

# Recipe: GPTQ W8A8 quantization of the language model's Linear layers,
# leaving lm_head and the vision tower unquantized.
recipe = [
    GPTQModifier(
        targets="Linear",
        scheme="W8A8",
        sequential_targets=["Qwen2_5_VLDecoderLayer"],
        dampening_frac=dampening_frac,
        ignore=["lm_head", "re:visual.*"],
    ),
]

SAVE_DIR = f"{model_id.split('/')[1]}-quantized.w8a8"

# Perform oneshot
oneshot(
    model=model,
    tokenizer=model_id,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    trust_remote_code_model=True,
    data_collator=data_collator,
    output_dir=SAVE_DIR,
)
```
### Evaluation

The model was evaluated using [mistral-evals](https://github.com/neuralmagic/mistral-evals) for vision-related tasks and [lm_evaluation_harness](https://github.com/neuralmagic/lm-evaluation-harness) for select text-based benchmarks. The evaluations were conducted using the following commands:
Evaluation Commands
#### Vision Tasks

- vqav2
- docvqa
- mathvista
- mmmu
- chartqa

```bash
vllm serve neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8 --tensor_parallel_size 1 --max_model_len 25000 --trust_remote_code --max_num_seqs 8 --gpu_memory_utilization 0.9 --dtype float16 --limit_mm_per_prompt image=7

python -m eval.run eval_vllm \
  --model_name neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8 \
  --url http://0.0.0.0:8000 \
  --output_dir ~/tmp \
  --eval_name <vision_task_name>
```
#### Text-based Tasks
MMLU
```bash
lm_eval \
  --model vllm \
  --model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks mmlu \
  --num_fewshot 5 \
  --batch_size auto \
  --output_path output_dir
```
MGSM
```bash
lm_eval \
  --model vllm \
  --model_args pretrained="<model_name>",dtype=auto,max_model_len=4096,max_gen_toks=2048,max_num_seqs=128,tensor_parallel_size=<n>,gpu_memory_utilization=0.9 \
  --tasks mgsm_cot_native \
  --apply_chat_template \
  --num_fewshot 0 \
  --batch_size auto \
  --output_path output_dir
```
#### Accuracy
| Category | Metric | Qwen/Qwen2.5-VL-7B-Instruct | Qwen2.5-VL-7B-Instruct-quantized.w8a8 | Recovery (%) |
|---|---|---|---|---|
| Vision | MMMU (val, CoT)<br>explicit_prompt_relaxed_correctness | 52.00 | 52.33 | 100.63% |
| Vision | VQAv2 (val)<br>vqa_match | 75.59 | 75.46 | 99.83% |
| Vision | DocVQA (val)<br>anls | 94.27 | 94.09 | 99.81% |
| Vision | ChartQA (test, CoT)<br>anywhere_in_answer_relaxed_correctness | 86.44 | 86.16 | 99.68% |
| Vision | Mathvista (testmini, CoT)<br>explicit_prompt_relaxed_correctness | 69.47 | 70.47 | 101.44% |
| Vision | Average Score | 75.95 | 75.90 | 99.93% |
| Text | MGSM (CoT) | 56.38 | 55.13 | 97.78% |
| Text | MMLU (5-shot) | 71.09 | 70.57 | 99.27% |
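For reference, the Recovery column is simply the quantized score expressed as a percentage of the baseline score, e.g.:

```python
# Recovery (%) = quantized score / baseline score * 100
baseline, quantized = 52.00, 52.33            # MMMU (val, CoT) scores from the table above
print(f"{quantized / baseline * 100:.2f}%")   # -> 100.63%
```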
### Inference Performance

This model achieves up to 1.56x speedup in single-stream deployment and up to 1.5x speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario. The following performance benchmarks were conducted with vLLM version 0.7.2 and GuideLLM.
Benchmarking Command
```bash
guidellm --model neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8 --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>
```

Single-stream performance (measured with vLLM version 0.7.2)
| Hardware | Model | Average Cost Reduction | Document Visual Question Answering<br>1680W x 2240H<br>64/128<br>Latency (s) | Document Visual Question Answering<br>1680W x 2240H<br>64/128<br>Queries Per Dollar | Visual Reasoning<br>640W x 480H<br>128/128<br>Latency (s) | Visual Reasoning<br>640W x 480H<br>128/128<br>Queries Per Dollar | Image Captioning<br>480W x 360H<br>0/128<br>Latency (s) | Image Captioning<br>480W x 360H<br>0/128<br>Queries Per Dollar |
|---|---|---|---|---|---|---|---|---|
| A6000x1 | Qwen/Qwen2.5-VL-7B-Instruct | | 4.9 | 912 | 3.2 | 1386 | 3.1 | 1431 |
| A6000x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8 | 1.50 | 3.6 | 1248 | 2.1 | 2163 | 2.0 | 2237 |
| A6000x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16 | 2.05 | 3.3 | 1351 | 1.4 | 3252 | 1.4 | 3321 |
| A100x1 | Qwen/Qwen2.5-VL-7B-Instruct | | 2.8 | 707 | 1.7 | 1162 | 1.7 | 1198 |
| A100x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8 | 1.24 | 2.4 | 851 | 1.4 | 1454 | 1.3 | 1512 |
| A100x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16 | 1.49 | 2.2 | 912 | 1.1 | 1791 | 1.0 | 1950 |
| H100x1 | Qwen/Qwen2.5-VL-7B-Instruct | | 2.0 | 557 | 1.2 | 919 | 1.2 | 941 |
| H100x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic | 1.28 | 1.6 | 698 | 0.9 | 1181 | 0.9 | 1219 |
| H100x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16 | 1.28 | 1.6 | 686 | 0.9 | 1191 | 0.9 | 1228 |

**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens

**QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).
Multi-stream asynchronous performance (measured with vLLM version 0.7.2)
| Hardware | Model | Average Cost Reduction | Document Visual Question Answering<br>1680W x 2240H<br>64/128<br>Maximum throughput (QPS) | Document Visual Question Answering<br>1680W x 2240H<br>64/128<br>Queries Per Dollar | Visual Reasoning<br>640W x 480H<br>128/128<br>Maximum throughput (QPS) | Visual Reasoning<br>640W x 480H<br>128/128<br>Queries Per Dollar | Image Captioning<br>480W x 360H<br>0/128<br>Maximum throughput (QPS) | Image Captioning<br>480W x 360H<br>0/128<br>Queries Per Dollar |
|---|---|---|---|---|---|---|---|---|
| A6000x1 | Qwen/Qwen2.5-VL-7B-Instruct | | 0.4 | 1837 | 1.5 | 6846 | 1.7 | 7638 |
| A6000x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8 | 1.41 | 0.5 | 2297 | 2.3 | 10137 | 2.5 | 11472 |
| A6000x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16 | 1.60 | 0.4 | 1828 | 2.7 | 12254 | 3.4 | 15477 |
| A100x1 | Qwen/Qwen2.5-VL-7B-Instruct | | 0.7 | 1347 | 2.6 | 5221 | 3.0 | 6122 |
| A100x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8 | 1.27 | 0.8 | 1639 | 3.4 | 6851 | 3.9 | 7918 |
| A100x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16 | 1.21 | 0.7 | 1314 | 3.0 | 5983 | 4.6 | 9206 |
| H100x1 | Qwen/Qwen2.5-VL-7B-Instruct | | 0.9 | 969 | 3.1 | 3358 | 3.3 | 3615 |
| H100x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic | 1.29 | 1.2 | 1331 | 3.8 | 4109 | 4.2 | 4598 |
| H100x1 | neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16 | 1.28 | 1.2 | 1298 | 3.8 | 4190 | 4.2 | 4573 |

**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens

**QPS: Queries per second.

**QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).
## 📄 License

This model is licensed under the [Apache-2.0](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md) license.