Llama-3.2-3B-Instruct-FP8-dynamic
This is an FP8-quantized version of Llama-3.2-3B-Instruct, optimized for inference with vLLM. It supports multiple languages and can be used for commercial and research purposes, especially in assistant-like chat scenarios.
Quick Start
Use with vLLM
This model can be deployed efficiently using the vLLM backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/Llama-3.2-3B-Instruct-FP8-dynamic"

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Render the chat into a single prompt string, appending the assistant header
# so the model knows it should generate a response next.
prompts = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

llm = LLM(model=model_id)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving; see the vLLM documentation for more details.
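As an illustrative sketch (not from the original card): once the model is served, for example with `vllm serve neuralmagic/Llama-3.2-3B-Instruct-FP8-dynamic`, any OpenAI client library can query it. The `base_url` and `api_key` values below are assumptions for a default local deployment.

```python
# Minimal sketch of querying a locally served model through the
# OpenAI-compatible API exposed by vLLM. Assumes the server was started with:
#   vllm serve neuralmagic/Llama-3.2-3B-Instruct-FP8-dynamic
# The base_url and api_key below are the usual defaults for a local server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/Llama-3.2-3B-Instruct-FP8-dynamic",
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
    temperature=0.6,
    top_p=0.9,
    max_tokens=256,
)
print(response.choices[0].message.content)
```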
Features
- Model Architecture: Meta-Llama-3.2, taking text as input and outputting text.
- Model Optimizations:
- Weight quantization: FP8
- Activation quantization: FP8
- Multilingual Support: Supports languages such as English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
- Intended Use Cases: Intended for commercial and research use in multiple languages, especially for assistant-like chat.
Installation
The original document does not list explicit installation steps. In practice, inference requires vLLM (`pip install vllm`), and reproducing the quantization shown below additionally requires LLM Compressor (`pip install llmcompressor`).
Usage Examples
Basic Usage
The basic usage example is shown in the "Quick Start" section above.
Advanced Usage
The code snippet below shows how this model was created; it doubles as an advanced usage example for LLM Compressor.
```python
import torch
from transformers import AutoTokenizer
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.transformers.compression.helpers import (  # noqa
    calculate_offload_device_map,
    custom_offload_device_map,
)

# Quantization recipe: static per-channel FP8 weights and dynamic per-token
# FP8 activations for all Linear layers, leaving lm_head unquantized.
recipe = """
quant_stage:
    quant_modifiers:
        QuantizationModifier:
            ignore: ["lm_head"]
            config_groups:
                group_0:
                    weights:
                        num_bits: 8
                        type: float
                        strategy: channel
                        dynamic: false
                        symmetric: true
                    input_activations:
                        num_bits: 8
                        type: float
                        strategy: token
                        dynamic: true
                        symmetric: true
                    targets: ["Linear"]
"""

model_stub = "meta-llama/Llama-3.2-3B-Instruct"
model_name = model_stub.split("/")[-1]

# Spread the model across the available GPU, offloading what does not fit.
device_map = calculate_offload_device_map(
    model_stub, reserve_for_hessians=False, num_gpus=1, torch_dtype="auto"
)

model = SparseAutoModelForCausalLM.from_pretrained(
    model_stub, torch_dtype="auto", device_map=device_map
)

output_dir = f"./{model_name}-FP8-dynamic"

# Apply the recipe in one shot and save the result in compressed form.
oneshot(
    model=model,
    recipe=recipe,
    output_dir=output_dir,
    save_compressed=True,
    tokenizer=AutoTokenizer.from_pretrained(model_stub),
)
```
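A note on the design, as reflected in the code above: the recipe leaves `lm_head` unquantized, and `save_compressed=True` writes the checkpoint in the compressed-tensors format that vLLM loads directly. No calibration dataset is passed to `oneshot` because none is needed for this scheme: weight scales are computed directly from the weights, and activation scales are computed dynamically at inference time.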
Documentation
Model Overview
- Model Architecture: Meta-Llama-3.2
- Input: Text
- Output: Text
- Model Optimizations:
- Weight quantization: FP8
- Activation quantization: FP8
- Intended Use Cases: Intended for commercial and research use in multiple languages. Similar to Llama-3.2-3B-Instruct, this model is intended for assistant-like chat.
- Out-of-scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- Release Date: 9/25/2024
- Version: 1.0
- License(s): llama3.2
- Model Developers: Neural Magic
This model achieves an average score of 50.88 on a subset of tasks from the OpenLLM benchmark (version 1), whereas the unquantized model achieves 51.70.
Model Optimizations
This model was obtained by quantizing the weights and activations of Llama-3.2-3B-Instruct to the FP8 data type, ready for inference with vLLM built from source. This optimization reduces the number of bits per parameter from 16 to 8, cutting disk size and GPU memory requirements by approximately 50%.
Only the weights and activations of the linear operators within transformer blocks are quantized. Weights are quantized with a symmetric per-channel scheme, in which one linear scale per output dimension maps the FP8 representations back to the original values; activations are quantized dynamically on a per-token basis. The quantization is performed with LLM Compressor.
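To make the scheme concrete, here is a minimal illustrative sketch of symmetric per-channel weight quantization and dynamic per-token activation quantization to FP8 (E4M3). It is not the LLM Compressor implementation; it assumes a recent PyTorch with the `torch.float8_e4m3fn` dtype.

```python
import torch

FP8_MAX = 448.0  # largest finite value representable in FP8 E4M3

def quantize_weight_per_channel(w: torch.Tensor):
    """Static symmetric per-channel quantization: one scale per output row."""
    scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-12) / FP8_MAX
    q = (w / scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return q, scale

def quantize_activation_per_token(x: torch.Tensor):
    """Dynamic symmetric per-token quantization: one scale per token, at runtime."""
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12) / FP8_MAX
    q = (x / scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return q, scale

w = torch.randn(8, 16)  # weight: [out_features, in_features]
x = torch.randn(4, 16)  # activations: [tokens, in_features]

qw, w_scale = quantize_weight_per_channel(w)
qx, x_scale = quantize_activation_per_token(x)

# Dequantize and compare against the full-precision reference matmul.
w_hat = qw.to(torch.float32) * w_scale
x_hat = qx.to(torch.float32) * x_scale
print("max abs error:", (x @ w.T - x_hat @ w_hat.T).abs().max().item())
```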
Creation
This model was created by applying LLM Compressor, as presented in the code snippet above.
Evaluation
The model was evaluated on MMLU, ARC-Challenge, GSM-8K, and Winogrande. Evaluation was conducted using the Neural Magic fork of lm-evaluation-harness (branch llama_3.1_instruct) and the vLLM engine. This version of the lm-evaluation-harness includes versions of ARC-Challenge, GSM-8K, MMLU, and MMLU-cot that match the prompting style of Meta-Llama-3.1-Instruct-evals.
Accuracy
Open LLM Leaderboard evaluation scores

| Benchmark | Llama-3.2-3B-Instruct | Llama-3.2-3B-Instruct-FP8-dynamic (this model) | Recovery |
| --- | --- | --- | --- |
| MMLU (5-shot) | 62.98 | 62.95 | 100.0% |
| MMLU-cot (0-shot) | 65.40 | 65.23 | 99.7% |
| ARC Challenge (0-shot) | 77.13 | 76.71 | 99.4% |
| GSM-8K-cot (8-shot, strict-match) | 77.94 | 76.72 | 98.4% |
| Winogrande (5-shot) | 71.11 | 71.11 | 100.0% |
| Hellaswag (10-shot) | 73.62 | 73.54 | 99.9% |
| TruthfulQA (0-shot, mc2) | 51.47 | 51.06 | 99.2% |
| **Average** | **68.52** | **68.19** | **99.5%** |
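Recovery is the quantized model's score expressed as a percentage of the unquantized baseline; for example, on GSM-8K-cot it is 76.72 / 77.94 ≈ 98.4%.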
Reproduction
The results were obtained using the following commands:
MMLU

```bash
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Llama-3.2-3B-Instruct-FP8-dynamic",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
  --tasks mmlu_llama_3.1_instruct \
  --fewshot_as_multiturn \
  --apply_chat_template \
  --num_fewshot 5 \
  --batch_size auto
```

MMLU-CoT

```bash
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Llama-3.2-3B-Instruct-FP8-dynamic",dtype=auto,max_model_len=4064,max_gen_toks=1024,tensor_parallel_size=1 \
  --tasks mmlu_cot_0shot_llama_3.1_instruct \
  --apply_chat_template \
  --num_fewshot 0 \
  --batch_size auto
```

ARC-Challenge

```bash
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Llama-3.2-3B-Instruct-FP8-dynamic",dtype=auto,max_model_len=3940,max_gen_toks=100,tensor_parallel_size=1 \
  --tasks arc_challenge_llama_3.1_instruct \
  --apply_chat_template \
  --num_fewshot 0 \
  --batch_size auto
```

GSM-8K

```bash
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Llama-3.2-3B-Instruct-FP8-dynamic",dtype=auto,max_model_len=4096,max_gen_toks=1024,tensor_parallel_size=1 \
  --tasks gsm8k_cot_llama_3.1_instruct \
  --fewshot_as_multiturn \
  --apply_chat_template \
  --num_fewshot 8 \
  --batch_size auto
```

Hellaswag

```bash
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Llama-3.2-3B-Instruct-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
  --tasks hellaswag \
  --num_fewshot 10 \
  --batch_size auto
```

Winogrande

```bash
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Llama-3.2-3B-Instruct-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
  --tasks winogrande \
  --num_fewshot 5 \
  --batch_size auto
```

TruthfulQA

```bash
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Llama-3.2-3B-Instruct-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
  --tasks truthfulqa \
  --num_fewshot 0 \
  --batch_size auto
```
Technical Details
As described under Model Optimizations, the weights and activations of Llama-3.2-3B-Instruct were quantized to the FP8 data type, covering only the linear operators within transformer blocks. Weights use symmetric per-channel quantization, activations are quantized dynamically per token, and the quantization is performed with LLM Compressor. This reduces the number of bits per parameter from 16 to 8, cutting disk size and GPU memory requirements by approximately 50%.
License
This model is released under the Llama 3.2 Community License (`llama3.2`).

