Llama-3.3-70B-Instruct-quantized.w8a8
A quantized version of Llama-3.3-70B-Instruct, optimized for efficient deployment while maintaining high performance across multiple languages.
Quick Start
This quantized model, Llama-3.3-70B-Instruct-quantized.w8a8, is well suited for commercial and research use in multiple languages. Like its base model, Llama-3.3-70B-Instruct, it is designed for assistant-like chat scenarios.
Features
- Model Architecture: Based on the Llama architecture, taking text as input and generating text as output.
- Model Optimizations:
  - Activation quantization: INT8
  - Weight quantization: INT8
  - These optimizations reduce GPU memory requirements by approximately 50% and increase matrix-multiply compute throughput by approximately 2x; weight quantization also cuts disk size requirements by about 50% (see the quick estimate below).
- Multilingual Support: Supports languages such as English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
- High-Quality Performance: Achieves 99.4% recovery for OpenLLM v1 (using Meta's prompting when available) and roughly 100% for both HumanEval and HumanEval+ pass@1.
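As a rough illustration of the ~50% weight-memory saving, here is a back-of-envelope estimate. The parameter count and per-parameter sizes are approximations; actual GPU memory also includes activations, the KV cache, and runtime overhead, which are not counted here.

```python
# Back-of-envelope estimate of weight storage for a ~70B-parameter model.
num_params = 70.6e9          # approximate parameter count of Llama-3.3-70B-Instruct
bf16_bytes = num_params * 2  # 2 bytes per parameter in BF16
int8_bytes = num_params * 1  # 1 byte per parameter with INT8 weights

print(f"BF16 weights: ~{bf16_bytes / 1e9:.0f} GB")
print(f"INT8 weights: ~{int8_bytes / 1e9:.0f} GB (~50% reduction)")
```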
Installation
The model can be deployed efficiently using the vLLM backend. Here is an example of how to use it:
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8"
number_gpus = 1
max_model_len = 8192
sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
llm = LLM(model=model_id, tensor_parallel_size=number_gpus, max_model_len=max_model_len)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
vLLM also supports OpenAI-compatible serving. Refer to the vLLM documentation for more details.
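For example, once an OpenAI-compatible server is running (e.g. via `vllm serve`), the model can be queried with the official `openai` Python client. This is a minimal sketch; the host, port, API key, and tensor-parallel size below are placeholder assumptions that depend on your deployment.

```python
from openai import OpenAI

# Assumes a local OpenAI-compatible server, e.g. started with:
#   vllm serve neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8 --tensor-parallel-size 4
# Adjust base_url, api_key, and the served model name to your setup.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
    temperature=0.6,
    max_tokens=256,
)
print(response.choices[0].message.content)
```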
Deployment on Different Platforms
Deploy on Red Hat AI Inference Server
podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
--ipc=host \
--env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
--env "HF_HUB_OFFLINE=0" -v ~/.cache/vllm:/home/vllm/.cache \
--name=vllm \
registry.access.redhat.com/rhaiis/rh-vllm-cuda \
vllm serve \
--tensor-parallel-size 8 \
--max-model-len 32768 \
--enforce-eager --model RedHatAI/Llama-3.3-70B-Instruct-quantized.w8a8
See Red Hat AI Inference Server documentation for more details.
Deploy on Red Hat Enterprise Linux AI
# Download model from Red Hat Registry via docker
# Note: This downloads the model to ~/.cache/instructlab/models unless --model-dir is specified.
ilab model download --repository docker://registry.redhat.io/rhelai1/llama-3-3-70b-instruct-quantized-w8a8:1.5
# Serve model via ilab
ilab model serve --model-path ~/.cache/instructlab/models/llama-3-3-70b-instruct-quantized-w8a8
# Chat with model
ilab model chat --model ~/.cache/instructlab/models/llama-3-3-70b-instruct-quantized-w8a8
See Red Hat Enterprise Linux AI documentation for more details.
Deploy on Red Hat OpenShift AI
# Setting up vllm server with ServingRuntime
# Save as: vllm-servingruntime.yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: vllm-cuda-runtime # OPTIONAL CHANGE: set a unique name
  annotations:
    openshift.io/display-name: vLLM NVIDIA GPU ServingRuntime for KServe
    opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
  labels:
    opendatahub.io/dashboard: 'true'
spec:
  annotations:
    prometheus.io/port: '8080'
    prometheus.io/path: '/metrics'
  multiModel: false
  supportedModelFormats:
    - autoSelect: true
      name: vLLM
  containers:
    - name: kserve-container
      image: quay.io/modh/vllm:rhoai-2.20-cuda # CHANGE if needed. If AMD: quay.io/modh/vllm:rhoai-2.20-rocm
      command:
        - python
        - -m
        - vllm.entrypoints.openai.api_server
      args:
        - "--port=8080"
        - "--model=/mnt/models"
        - "--served-model-name={{.Name}}"
      env:
        - name: HF_HOME
          value: /tmp/hf_home
      ports:
        - containerPort: 8080
          protocol: TCP
# Attach model to vllm server. This is an NVIDIA template
# Save as: inferenceservice.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  annotations:
    openshift.io/display-name: llama-3-3-70b-instruct-quantized-w8a8 # OPTIONAL CHANGE
    serving.kserve.io/deploymentMode: RawDeployment
  name: llama-3-3-70b-instruct-quantized-w8a8 # specify model name. This value will be used to invoke the model in the payload
  labels:
    opendatahub.io/dashboard: 'true'
spec:
  predictor:
    maxReplicas: 1
    minReplicas: 1
    model:
      modelFormat:
        name: vLLM
      name: ''
      resources:
        limits:
          cpu: '2'              # this is model specific
          memory: 8Gi           # this is model specific
          nvidia.com/gpu: '1'   # this is accelerator specific
        requests:               # same comment for this block
          cpu: '1'
          memory: 4Gi
          nvidia.com/gpu: '1'
      runtime: vllm-cuda-runtime # must match the ServingRuntime name above
      storageUri: oci://registry.redhat.io/rhelai1/modelcar-llama-3-3-70b-instruct-quantized-w8a8:1.5
    tolerations:
      - effect: NoSchedule
        key: nvidia.com/gpu
        operator: Exists
# make sure first to be in the project where you want to deploy the model
# oc project <project-name>
# apply both resources to run model
# Apply the ServingRuntime
oc apply -f vllm-servingruntime.yaml
# Apply the InferenceService
oc apply -f inferenceservice.yaml
# Replace <inference-service-name> and <cluster-ingress-domain> below:
# - Run `oc get inferenceservice` to find your URL if unsure.
# Call the server using curl:
curl https://<inference-service-name>-predictor-default.<cluster-ingress-domain>/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "llama-3-3-70b-instruct-quantized-w8a8 ",
"stream": true,
"stream_options": {
"include_usage": true
},
"max_tokens": 1,
"messages": [
{
"role": "user",
"content": "How can a bee fly when its wings are so small?"
}
]
}'
See Red Hat OpenShift AI documentation for more details.
Usage Examples
Basic Usage
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8"
number_gpus = 1
max_model_len = 8192
sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
llm = LLM(model=model_id, tensor_parallel_size=number_gpus, max_model_len=max_model_len)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
Advanced Usage
# Advanced usage can involve customizing more parameters in the SamplingParams,
# or using different deployment setups for specific requirements.
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8"
number_gpus = 4 # Increase the number of GPUs for higher throughput
max_model_len = 16384 # Increase the maximum sequence length
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=512)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "system", "content": "You are an expert in multiple languages and can provide detailed explanations."},
{"role": "user", "content": "Explain the concept of quantum mechanics in simple terms."},
]
prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
llm = LLM(model=model_id, tensor_parallel_size=number_gpus, max_model_len=max_model_len)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
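For higher throughput, vLLM can also process many chat prompts in a single `generate` call. The sketch below is illustrative: the questions, GPU count, and sequence length are placeholders rather than recommended settings.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, tensor_parallel_size=4, max_model_len=8192)
sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

# One chat-formatted prompt per question; vLLM batches and schedules them internally.
questions = [
    "Summarize the plot of Don Quixote in two sentences.",
    "Explique brièvement la photosynthèse.",
    "Erkläre kurz, was ein neuronales Netz ist.",
]
prompts = [
    tokenizer.apply_chat_template(
        [{"role": "user", "content": q}], add_generation_prompt=True, tokenize=False
    )
    for q in questions
]

outputs = llm.generate(prompts, sampling_params)
for question, output in zip(questions, outputs):
    print(question)
    print(output.outputs[0].text)
    print("-" * 40)
```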
Documentation
Model Overview
- Model Architecture: Llama
  - Input: Text
  - Output: Text
- Model Optimizations:
  - Activation quantization: INT8
  - Weight quantization: INT8
- Intended Use Cases: Intended for commercial and research use in multiple languages, especially assistant-like chat.
- Out-of-scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws).
- Release Date: 01/20/2025
- Version: 1.0
- Model Developers: Neural Magic
Model Optimizations
This model was obtained by quantizing the weights and activations of Llama-3.3-70B-Instruct to the INT8 data type. Only the weights and activations of the linear operators within the transformer blocks are quantized. Weights are quantized with a symmetric static per-channel scheme, and activations are quantized with a symmetric dynamic per-token scheme.
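To illustrate what these schemes mean in practice, the standalone sketch below (not the code used to produce this model) computes one weight scale per output channel offline and one activation scale per token at runtime, then performs the matmul in INT8 and rescales the result:

```python
import torch

def quantize_weights_per_channel(w: torch.Tensor):
    """Symmetric static per-channel INT8: one scale per output channel, computed once."""
    scale = w.abs().amax(dim=1, keepdim=True) / 127.0  # shape [out_features, 1]
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def quantize_activations_per_token(x: torch.Tensor):
    """Symmetric dynamic per-token INT8: one scale per token, computed at runtime."""
    scale = x.abs().amax(dim=-1, keepdim=True) / 127.0  # shape [num_tokens, 1]
    q = torch.clamp(torch.round(x / scale), -127, 127).to(torch.int8)
    return q, scale

w = torch.randn(4096, 4096)  # weight of one linear operator, [out_features, in_features]
x = torch.randn(8, 4096)     # activations for 8 tokens

qw, w_scale = quantize_weights_per_channel(w)
qx, x_scale = quantize_activations_per_token(x)

# INT8 matmul accumulated in INT32, then rescaled back to floating point.
y_int32 = qx.to(torch.int32) @ qw.t().to(torch.int32)
y = y_int32.float() * x_scale * w_scale.t()

ref = x @ w.t()
print(f"relative error vs FP32 matmul: {(y - ref).norm() / ref.norm():.4f}")
```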
Creation
This model was created using the llm-compressor library, as shown in the code snippet below.
from datasets import Dataset
from transformers import AutoTokenizer
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.modifiers.quantization import GPTQModifier
import random

model_id = "meta-llama/Llama-3.3-70B-Instruct"

num_samples = 1024
max_seq_len = 8192

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Calibration data: random token sequences of length max_seq_len.
max_token_id = len(tokenizer.get_vocab()) - 1
input_ids = [[random.randint(0, max_token_id) for _ in range(max_seq_len)] for _ in range(num_samples)]
attention_mask = num_samples * [max_seq_len * [1]]
ds = Dataset.from_dict({"input_ids": input_ids, "attention_mask": attention_mask})

# GPTQ recipe: quantize all Linear layers to W8A8, keeping the lm_head in full precision.
recipe = GPTQModifier(
    targets="Linear",
    scheme="W8A8",
    ignore=["lm_head"],
    dampening_frac=0.01,
)

model = SparseAutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
)

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=max_seq_len,
    num_calibration_samples=num_samples,
)

model.save_pretrained("Llama-3.3-70B-Instruct-quantized.w8a8")
Evaluation
This model was evaluated on well-known benchmarks such as OpenLLM v1, OpenLLM v2, HumanEval, and HumanEval+. In all cases, model outputs were generated with the vLLM engine.
Accuracy
| Category | Benchmark | Llama-3.3-70B-Instruct | Llama-3.3-70B-Instruct-quantized.w8a8 (this model) | Recovery |
|---|---|---|---|---|
| OpenLLM v1 | MMLU (5-shot) | 81.60 | 81.19 | 99.5% |
| OpenLLM v1 | MMLU (CoT, 0-shot) | 86.58 | 85.92 | 99.2% |
| OpenLLM v1 | ARC Challenge (0-shot) | 49.23 | 48.04 | 97.6% |
| OpenLLM v1 | GSM-8K (CoT, 8-shot, strict-match) | 94.16 | 94.01 | 99.8% |
| OpenLLM v1 | Hellaswag (10-shot) | 86.49 | 86.47 | 100.0% |
| OpenLLM v1 | Winogrande (5-shot) | 84.77 | 83.74 | 98.8% |
| OpenLLM v1 | TruthfulQA (0-shot, mc2) | 62.75 | 63.09 | 99.5% |
| OpenLLM v1 | Average | 77.94 | 77.49 | 99.4% |
| OpenLLM v2 | MMLU-Pro (5-shot) | 51.89 | 51.59 | 99.7% |
| OpenLLM v2 | IFEval (0-shot) | 90.89 | 90.68 | 99.4% |
| OpenLLM v2 | BBH (3-shot) | 63.15 | 62.54 | 99.0% |
| OpenLLM v2 | Math-lvl-5 (4-shot) | 0.17 | 0.00 | N/A |
| OpenLLM v2 | GPQA (0-shot) | 46.10 | 46.44 | 100.8% |
| OpenLLM v2 | MuSR (0-shot) | 44.35 | 44.34 | 100.0% |
| OpenLLM v2 | Average | 49.42 | 49.27 | 99.7% |
| Coding | HumanEval pass@1 | 83.20 | 83.30 | 100.1% |
| Coding | HumanEval+ pass@1 | 78.40 | 78.60 | 100.3% |
| Multilingual | Portuguese MMLU (5-shot) | 79.76 | 79.47 | 99.6% |
| Multilingual | Spanish MMLU (5-shot) | 79.33 | 79.23 | 99.9% |
| Multilingual | Italian MMLU (5-shot) | 79.15 | 78.80 | 99.6% |
| Multilingual | German MMLU (5-shot) | 77.94 | 77.92 | 100.0% |
| Multilingual | French MMLU (5-shot) | 75.69 | 75.79 | 100.1% |
| Multilingual | Hindi MMLU (5-shot) | 73.81 | 73.49 | 99.6% |
| Multilingual | Thai MMLU (5-shot) | 71.97 | 71.44 | 99.2% |
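The Recovery column is simply the quantized model's score divided by the baseline score. A quick sanity check in Python, using the MMLU (5-shot) row above:

```python
baseline = 81.60   # Llama-3.3-70B-Instruct, MMLU (5-shot)
quantized = 81.19  # Llama-3.3-70B-Instruct-quantized.w8a8, MMLU (5-shot)
recovery = quantized / baseline * 100
print(f"{recovery:.1f}%")  # 99.5%
```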
Reproduction
The results were obtained using the following commands:
MMLU
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
--tasks mmlu_llama_3.1_instruct \
--fewshot_as_multiturn \
--apply_chat_template \
--num_fewshot 5 \
--batch_size auto
MMLU-CoT
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=4064,max_gen_toks=1024,tensor_parallel_size=1 \
--tasks mmlu_cot_0shot_llama_3.1_instruct \
--apply_chat_template \
--num_fewshot 0 \
--batch_size auto
ARC-Challenge
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3940,max_gen_toks=100,tensor_parallel_size=1 \
--tasks arc_challenge_llama_3.1_instruct \
--apply_chat_template \
--num_fewshot 0 \
--batch_size auto
GSM-8K
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=4096,max_gen_toks=1024,tensor_parallel_size=1 \
--tasks gsm8k_cot_llama_3.1_instruct \
--fewshot_as_multiturn \
--apply_chat_template \
--num_fewshot 8 \
--batch_size auto
Hellaswag
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks hellaswag \
--num_fewshot 10 \
--batch_size auto
Winogrande
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks winogrande \
--num_fewshot 5 \
--batch_size auto
TruthfulQA
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks truthfulqa \
--num_fewshot 0 \
--batch_size auto
OpenLLM v2
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=4096,tensor_parallel_size=1,enable_chunked_prefill=True \
--apply_chat_template \
--fewshot_as_multiturn \
--tasks leaderboard \
--batch_size auto
MMLU Portuguese
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
--tasks mmlu_pt_llama_3.1_instruct \
--fewshot_as_multiturn \
--apply_chat_template \
--num_fewshot 5 \
--batch_size auto
MMLU Spanish
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
--tasks mmlu_es_llama_3.1_instruct \
--fewshot_as_multiturn \
--apply_chat_template \
--num_fewshot 5 \
--batch_size auto
MMLU Italian
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
--tasks mmlu_it_llama_3.1_instruct \
--fewshot_as_multiturn \
--apply_chat_template \
--num_fewshot 5 \
--batch_size auto
MMLU German
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
--tasks mmlu_de_llama_3.1_instruct \
--fewshot_as_multiturn \
--apply_chat_template \
--num_fewshot 5 \
--batch_size auto
MMLU French
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
--tasks mmlu_fr_llama_3.1_instruct \
--fewshot_as_multiturn \
--apply_chat_template \
--num_fewshot 5 \
--batch_size auto
MMLU Hindi
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
--tasks mmlu_hi_llama_3.1_instruct \
--fewshot_as_multiturn \
--apply_chat_template \
--num_fewshot 5 \
--batch_size auto
MMLU Thai
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8",dtype=auto,max_model_len=3850,max_gen_toks=10,tensor_parallel_size=1 \
--tasks mmlu_th_llama_3.1_instruct \
--fewshot_as_multiturn \
--apply_chat_template \
--num_fewshot 5 \
--batch_size auto
HumanEval and HumanEval+
Generation
python3 codegen/generate.py \
--model neuralmagic-ent/Llama-3.3-70B-Instruct-quantized.w8a8 \
--bs 16 \
--temperature 0.2 \
--n_samples 50 \
--root "." \
--dataset humaneval
Sanitization
python3 evalplus/sanitize.py \
humaneval/neuralmagic-ent--Llama-3.3-70B-Instruct-quantized.w8a8_vllm_temp_0.2
Evaluation
evalplus.evaluate \
--dataset humaneval \
--samples humaneval/neuralmagic-ent--Llama-3.3-70B-Instruct-quantized.w8a8_vllm_temp_0.2-sanitized
Technical Details
Model Quantization
The quantization process converts the weights and activations of the original model to the INT8 data type. This is done only for the linear operators within the transformer blocks. Weight quantization uses a symmetric static per-channel scheme, and activation quantization uses a symmetric dynamic per-token scheme. These schemes enable efficient computation while preserving the model's accuracy.
Evaluation Benchmarks
The model was evaluated on well-known benchmarks such as OpenLLM v1, OpenLLM v2, HumanEval, and HumanEval+. The evaluations were conducted using specific forks of the relevant libraries: Neural Magic's fork of lm-evaluation-harness for the OpenLLM evaluations and its fork of EvalPlus for the HumanEval and HumanEval+ evaluations.
License
The model is released under the Llama 3.3 license (llama3.3).

