Transformers Library for Llama 4
This library provides a convenient way to use the Llama 4 models: natively multimodal AI models that enable text and multimodal experiences. They offer high performance in text and image understanding and support multiple languages.
Quick Start
By using this model you agree to the license agreement of the original Llama 4 by Meta. See the license agreement at https://huggingface.co/meta-llama/Llama-4-Scout-17B-16E-Instruct.
Prerequisites
Make sure you have transformers v4.51.0 installed, or upgrade using `pip install -U transformers`.
Example Code
```python
from transformers import AutoProcessor, Llama4ForConditionalGeneration
import torch

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"

processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    attn_implementation="flex_attention",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

url1 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"
url2 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/cat_style_layout.png"

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": url1},
            {"type": "image", "url": url2},
            {"type": "text", "text": "Can you describe how these two images are similar, and how they differ?"},
        ],
    },
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
)

response = processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])[0]
print(response)
print(outputs[0])
```
Features
- Multimodal Capabilities: The Llama 4 models accept both text and image inputs, enabling tasks such as visual recognition, image reasoning, captioning, and visual question answering.
- Multilingual Support: Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese are supported (see the text-only, multilingual sketch after this list).
- High-Performance Architecture: A mixture-of-experts architecture delivers industry-leading performance in text and image understanding.
- Flexible Deployment: Quantized checkpoints are available for deployment flexibility.
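The Quick Start example above is image-based; as a complement, the following minimal sketch sends a text-only, multilingual prompt through the same chat template, reusing the `model` and `processor` loaded in the Quick Start. The French prompt is only illustrative.

```python
# Minimal text-only sketch, reusing `model` and `processor` from the Quick Start.
# The French prompt is illustrative; any supported language can be used.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Résume en deux phrases ce qu'est une architecture mixture-of-experts."},
        ],
    },
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])[0])
```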
Installation
Make sure you have transformers v4.51.0 installed. You can upgrade it using the following command:

```bash
pip install -U transformers
```
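To confirm that the installed version meets the requirement above, a quick check such as the following can be used (the 4.51.0 threshold comes from this section; `packaging` is already a dependency of transformers):

```python
# Quick version check; Llama 4 support requires transformers >= 4.51.0.
import transformers
from packaging import version

assert version.parse(transformers.__version__) >= version.parse("4.51.0"), (
    f"transformers {transformers.__version__} is too old; run `pip install -U transformers`"
)
print(transformers.__version__)
```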
Usage Examples
Basic Usage
```python
import torch
from transformers import Llama4ForConditionalGeneration

bias_unlearned_model = Llama4ForConditionalGeneration.from_pretrained(
    "hirundo-io/debiased-Llama-4-Scout-17B-16E-Instruct",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
```
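The debiased checkpoint can then be driven like the base model. The sketch below assumes the repository ships the same processor and chat template as the base Llama 4 Scout model; if it does not, the base `meta-llama/Llama-4-Scout-17B-16E-Instruct` processor can be used instead.

```python
# Hedged usage sketch: the processor id assumes the debiased checkpoint is
# compatible with the base Llama 4 Scout processor and chat template.
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("hirundo-io/debiased-Llama-4-Scout-17B-16E-Instruct")

messages = [
    {"role": "user", "content": [{"type": "text", "text": "Briefly explain what model debiasing means."}]},
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(bias_unlearned_model.device)

outputs = bias_unlearned_model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])[0])
```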
Documentation
Model Information
| Property | Details |
|---|---|
| Model Type | The Llama 4 models are auto-regressive language models that use a mixture-of-experts (MoE) architecture and incorporate early fusion for native multimodality. |
| Training Data | A mix of publicly available, licensed data and information from Meta's products and services. This includes publicly shared posts from Instagram and Facebook and people's interactions with Meta AI. The pretraining data has a cutoff of August 2024. |
| Supported Languages | Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese. |
| Model Release Date | April 5, 2025 |
| Status | This is a static model trained on an offline dataset. Future versions of the tuned models may be released as we improve model behavior with community feedback. |
| License | A custom commercial license, the Llama 4 Community License Agreement, is available at: https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE |
Model Parameters Table
| Model Name | Training Data | Params | Input modalities | Output modalities | Context length | Token count | Knowledge cutoff |
|---|---|---|---|---|---|---|---|
| Llama 4 Scout (17Bx16E) | A mix of publicly available, licensed data and information from Meta's products and services. This includes publicly shared posts from Instagram and Facebook and people's interactions with Meta AI. Learn more in our Privacy Center. | 17B (Activated), 109B (Total) | Multilingual text and image | Multilingual text and code | 10M | ~40T | August 2024 |
| Llama 4 Maverick (17Bx128E) | | 17B (Activated), 400B (Total) | Multilingual text and image | Multilingual text and code | 1M | ~22T | August 2024 |
Intended Use
- Intended Use Cases: Llama 4 is intended for commercial and research use in multiple languages. Instruction-tuned models are for assistant-like chat and visual reasoning tasks, and pretrained models can be adapted for natural language generation. For vision, it is optimized for visual recognition, image reasoning, captioning, and answering general questions about an image (a short captioning sketch follows this list). It also supports leveraging model outputs to improve other models.
- Out-of-scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws), or in any way prohibited by the Acceptable Use Policy and the Llama 4 Community License. Use in languages or capabilities beyond those explicitly referenced as supported in this model card.
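As a small illustration of the captioning use case, the following sketch reuses the `model` and `processor` loaded in the Quick Start; the image URL is the public rabbit image used earlier in this card and the prompt wording is illustrative.

```python
# Short captioning sketch, reusing `model` and `processor` from the Quick Start.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"},
            {"type": "text", "text": "Write a one-sentence caption for this image."},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device)
caption = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(caption[:, inputs["input_ids"].shape[-1]:])[0])
```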
Benchmarks
Pre-trained models

| Category | Benchmark | # Shots | Metric | Llama 3.1 70B | Llama 3.1 405B | Llama 4 Scout | Llama 4 Maverick |
|---|---|---|---|---|---|---|---|
| Reasoning & Knowledge | MMLU | 5 | macro_avg/acc_char | 79.3 | 85.2 | 79.6 | 85.5 |
| | MMLU-Pro | 5 | macro_avg/em | 53.8 | 61.6 | 58.2 | 62.9 |
| | MATH | 4 | em_maj1@1 | 41.6 | 53.5 | 50.3 | 61.2 |
| Code | MBPP | 3 | pass@1 | 66.4 | 74.4 | 67.8 | 77.6 |
| Multilingual | TydiQA | 1 | average/f1 | 29.9 | 34.3 | 31.5 | 31.7 |
| Image | ChartQA | 0 | relaxed_accuracy | No multimodal support | | 83.4 | 85.3 |
| | DocVQA | 0 | anls | | | 89.4 | 91.6 |
Instruction tuned models

| Category | Benchmark | # Shots | Metric | Llama 3.3 70B | Llama 3.1 405B | Llama 4 Scout | Llama 4 Maverick |
|---|---|---|---|---|---|---|---|
| Image Reasoning | MMMU | 0 | accuracy | No multimodal support | | 69.4 | 73.4 |
| | MMMU Pro^ | 0 | accuracy | | | 52.2 | 59.6 |
| | MathVista | 0 | accuracy | | | 70.7 | 73.7 |
| Image Understanding | ChartQA | 0 | relaxed_accuracy | | | 88.8 | 90.0 |
| | DocVQA (test) | 0 | anls | | | 94.4 | 94.4 |
| Coding | LiveCodeBench (10/01/2024 - 02/01/2025) | 0 | pass@1 | 33.3 | 27.7 | 32.8 | 43.4 |
| Reasoning & Knowledge | MMLU Pro | 0 | macro_avg/acc | 68.9 | 73.4 | 74.3 | 80.5 |
| | GPQA Diamond | 0 | accuracy | 50.5 | 49.0 | 57.2 | 69.8 |
| Multilingual | MGSM | 0 | average/em | 91.1 | 91.6 | 90.6 | 92.3 |
| Long context | MTOB (half book) eng->kgv/kgv->eng | - | chrF | Context window is 128K | | 42.2/36.6 | 54.0/46.4 |
| | MTOB (full book) eng->kgv/kgv->eng | - | chrF | | | 39.7/36.3 | 50.8/46.7 |
^Reported numbers for MMMU Pro are the average of the Standard and Vision tasks.
Quantization
The Llama 4 Scout model is released as BF16 weights, but can fit within a single H100 GPU with on-the-fly int4 quantization. The Llama 4 Maverick model is released as both BF16 and FP8 quantized weights. The FP8 quantized weights fit on a single H100 DGX host while still maintaining quality.
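A hedged loading sketch for on-the-fly 4-bit quantization is shown below. The bitsandbytes backend, the nf4 quantization type, and the bfloat16 compute dtype are assumptions chosen for illustration, not the only or official configuration; the FP8 Maverick checkpoint can instead be loaded directly.

```python
# Hedged sketch: on-the-fly 4-bit quantization of Llama 4 Scout with bitsandbytes.
# Assumes `bitsandbytes` is installed; nf4 and bfloat16 compute are illustrative choices.
import torch
from transformers import AutoProcessor, BitsAndBytesConfig, Llama4ForConditionalGeneration

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
```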
Safeguards
As part of the release approach, a three-pronged strategy is followed to manage risks:
- Enable developers to deploy helpful, safe and flexible experiences.
- Protect developers against adversarial users.
- Provide protections for the community to prevent model misuse.
Model-level fine-tuning
- Fine-tuning data: A multi-faceted approach to data collection, combining human-generated data with synthetic data.
- Refusals: Emphasis on reducing model refusals to benign prompts.
- Tone: Expanded work on refusal tone for a more natural-sounding model.
- System Prompts: Llama 4 is more steerable, and effective system prompts can enhance model performance (an illustrative example follows this list).
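For illustration, a system prompt can be prepended to the chat messages before applying the chat template; the sketch below reuses the `model` and `processor` from the Quick Start, and the prompt wording is an example rather than Meta's recommended system prompt.

```python
# Illustrative system prompt; the wording is an example, not an official recommendation.
messages = [
    {"role": "system", "content": [{"type": "text", "text": "You are a concise assistant. Answer in at most three sentences."}]},
    {"role": "user", "content": [{"type": "text", "text": "What is early-fusion multimodality?"}]},
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])[0])
```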
Llama 4 system protections
System protections like Llama Guard, Prompt Guard and Code Shield are provided for deployment with Llama models or other LLMs.
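As a sketch of how such a protection might be wired in, the snippet below screens a user prompt with a Llama Guard checkpoint before it reaches Llama 4. The model id (`meta-llama/Llama-Guard-3-8B`) and the plain-text "safe"/"unsafe" verdict format are assumptions based on the publicly released Llama Guard models; consult the corresponding model card before relying on this.

```python
# Hedged sketch: screening a user prompt with a Llama Guard checkpoint (assumed id/format).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

guard_id = "meta-llama/Llama-Guard-3-8B"  # assumption; pick the Llama Guard version you deploy
tokenizer = AutoTokenizer.from_pretrained(guard_id)
guard = AutoModelForCausalLM.from_pretrained(guard_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "How do I write a phishing email?"}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(guard.device)
output = guard.generate(input_ids=input_ids, max_new_tokens=20)
verdict = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(verdict)  # expected to be something like "safe" or "unsafe" plus a category code
```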
Evaluations
- Common use cases evaluations: Measure safety risks of systems for common applications.
- Capability evaluations: Measure vulnerabilities inherent to specific capabilities.
- Red teaming: Conducted to discover risks via adversarial prompting.
Critical Risks
Additional focus is on areas such as CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness.
Technical Details
Training Factors
Custom training libraries, Meta's custom-built GPU clusters, and production infrastructure were used for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.
Training Energy Use
Model pre-training utilized a cumulative 7.38M GPU hours of computation on H100-80GB (TDP of 700W) hardware.
Training Greenhouse Gas Emissions
Estimated total location-based greenhouse gas emissions were 1,999 tons CO2eq for training. The total market-based greenhouse gas emissions for training were 0 tons CO2eq.
| Model Name | Training Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
|---|---|---|---|---|
| Llama 4 Scout | 5.0M | 700 | 1,354 | 0 |
| Llama 4 Maverick | 2.38M | 700 | 645 | 0 |
| Total | 7.38M | - | 1,999 | 0 |
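The per-model rows and the totals in the table are mutually consistent, as a quick arithmetic check confirms:

```python
# Sanity check of the totals reported in the table above.
gpu_hours = {"Llama 4 Scout": 5.0e6, "Llama 4 Maverick": 2.38e6}
location_based_tco2eq = {"Llama 4 Scout": 1354, "Llama 4 Maverick": 645}

assert round(sum(gpu_hours.values()) / 1e6, 2) == 7.38   # 7.38M GPU hours total
assert sum(location_based_tco2eq.values()) == 1999       # 1,999 tons CO2eq total
```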
The methodology used to determine training energy use and greenhouse gas emissions can be found here.
License
A custom commercial license, the Llama 4 Community License Agreement, is available at: https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE