🚀 Llama 4 Model
Llama 4 is a collection of natively multimodal AI models. It supports multiple languages and handles tasks such as text generation, visual recognition, and image reasoning, offering high-performance solutions for commercial and research use.
🚀 Quick Start
Please make sure you have transformers v4.51.0 or later installed, or upgrade with `pip install -U transformers`.
```python
from transformers import AutoProcessor, Llama4ForConditionalGeneration
import torch

model_id = "meta-llama/Llama-4-Maverick-17B-128E-Instruct"

# Load the processor (tokenizer + image preprocessing) and the model.
processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    attn_implementation="flex_attention",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Two example images to compare.
url1 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"
url2 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/cat_style_layout.png"

# A single user turn containing both images and a text question.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": url1},
            {"type": "image", "url": url2},
            {"type": "text", "text": "Can you describe how these two images are similar, and how they differ?"},
        ],
    },
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
)

# Decode only the newly generated tokens (everything after the prompt).
response = processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])[0]
print(response)
print(outputs[0])
```
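Note that `batch_decode` is called above without `skip_special_tokens`, so the decoded string still contains special tokens such as the end-of-turn marker. If you want plain text, the standard tokenizer argument can be passed through:

```python
# Optional: strip special tokens (e.g. the end-of-turn marker) from the decoded text.
response = processor.batch_decode(
    outputs[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)[0]
```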
✨ Features
- Multimodal Capabilities: Llama 4 is a natively multimodal AI model that can handle both text and image data, enabling tasks such as visual recognition, image reasoning, captioning, and answering questions about images.
- Multilingual Support: It supports multiple languages, including Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese, making it suitable for a wide range of global applications.
- High Performance: The models in the Llama 4 collection show significant improvements over previous Llama models on a range of benchmarks, including reasoning, coding, and multilingual tasks.
- Flexible Use Cases: It can be used for commercial and research purposes, including assistant-like chat, natural language generation, and improving other models through synthetic data generation and distillation (a minimal text-only chat sketch follows this list).
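As an illustration of the assistant-like chat use case, here is a minimal text-only sketch using the same `transformers` API as the Quick Start above; the prompt and generation settings are illustrative, not recommendations:

```python
from transformers import AutoProcessor, Llama4ForConditionalGeneration
import torch

model_id = "meta-llama/Llama-4-Maverick-17B-128E-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    attn_implementation="flex_attention",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Text-only chat: the content list holds a single text item, no images.
messages = [
    {"role": "user", "content": [{"type": "text", "text": "Explain mixture-of-experts models in two sentences."}]},
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True)[0])
```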
📚 Documentation
Model Information
- Model Developer: Meta
- Model Architecture: The Llama 4 models are auto-regressive language models that use a mixture-of-experts (MoE) architecture and incorporate early fusion for native multimodality.

| Property | Details |
|----------|---------|
| Model Type | Natively multimodal AI models |
| Training Data | A mix of publicly available, licensed data and information from Meta's products and services, including publicly shared posts from Instagram and Facebook and people's interactions with Meta AI. The pretraining data has a cutoff of August 2024. |
| Supported Languages | Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese |
| Model Release Date | April 5, 2025 |
| Status | A static model trained on an offline dataset. Future versions of the tuned models may be released. |
| License | A custom commercial license, the Llama 4 Community License Agreement, available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE) |
| Where to Send Questions | Instructions on how to provide feedback or comments on the model can be found in the Llama [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for using Llama 4 in applications, see the [Llama cookbook](https://github.com/meta-llama/llama-cookbook). |
Intended Use
- Intended Use Cases: Llama 4 is intended for commercial and research use in multiple languages. Instruction tuned models are for assistant-like chat and visual reasoning tasks, while pretrained models can be adapted for natural language generation. For vision, it is optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. It also supports leveraging its outputs to improve other models.
- Out-of-scope: Use that violates applicable laws or regulations (including trade compliance laws), is prohibited by the Acceptable Use Policy and Llama 4 Community License, or goes beyond the supported languages and capabilities described in this model card.
Hardware and Software
- Training Factors: Custom training libraries, Meta's custom-built GPU clusters, and production infrastructure were used for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.
- Training Energy Use: Model pre-training utilized a cumulative 7.38M GPU hours of computation on H100-80GB (700W TDP) hardware.
- Training Greenhouse Gas Emissions: Estimated total location-based greenhouse gas emissions were 1,999 tons CO2eq for training; total market-based greenhouse gas emissions were 0 tons CO2eq. A rough sanity check of these figures follows the table.

| Model Name | Training Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | :---: | :---: | :---: |
| Llama 4 Scout | 5.0M | 700 | 1,354 | 0 |
| Llama 4 Maverick | 2.38M | 700 | 645 | 0 |
| Total | 7.38M | - | 1,999 | 0 |
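For readers who want to verify the rough arithmetic behind these numbers, the sketch below recomputes the GPU energy from the reported GPU hours and TDP and back-calculates the grid carbon intensity implied by the location-based total. The implied intensity is an inference from the figures above, not a value published in this card, and the real methodology accounts for factors (such as data center overhead) not modeled here.

```python
# Rough sanity check of the training energy and emissions figures above.
# GPU hours and the 700 W TDP come from the table; the implied carbon
# intensity is back-calculated and is NOT an officially published number.
gpu_hours = {"Llama 4 Scout": 5.0e6, "Llama 4 Maverick": 2.38e6}
tdp_watts = 700

energy_mwh = sum(h * tdp_watts for h in gpu_hours.values()) / 1e6  # Wh -> MWh
print(f"GPU energy at TDP: {energy_mwh:,.0f} MWh")                 # ~5,166 MWh

reported_location_based_tco2 = 1_999
implied_g_per_kwh = reported_location_based_tco2 * 1e6 / (energy_mwh * 1e3)
print(f"Implied carbon intensity: ~{implied_g_per_kwh:.0f} gCO2eq/kWh")  # ~387
```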
Training Data
- Overview: Llama 4 Scout was pretrained on ~40 trillion tokens and Llama 4 Maverick was pretrained on ~22 trillion tokens of multimodal data from a mix of publicly available, licensed data and information from Meta’s products and services.
- Data Freshness: The pretraining data has a cutoff of August 2024.
Benchmarks
Pre-trained models
| Category | Benchmark | # Shots | Metric | Llama 3.1 70B | Llama 3.1 405B | Llama 4 Scout | Llama 4 Maverick |
|---|---|---|---|---|---|---|---|
| Reasoning & Knowledge | MMLU | 5 | macro_avg/acc_char | 79.3 | 85.2 | 79.6 | 85.5 |
| | MMLU-Pro | 5 | macro_avg/em | 53.8 | 61.6 | 58.2 | 62.9 |
| | MATH | 4 | em_maj1@1 | 41.6 | 53.5 | 50.3 | 61.2 |
| Code | MBPP | 3 | pass@1 | 66.4 | 74.4 | 67.8 | 77.6 |
| Multilingual | TydiQA | 1 | average/f1 | 29.9 | 34.3 | 31.5 | 31.7 |
| Image | ChartQA | 0 | relaxed_accuracy | No multimodal support | No multimodal support | 83.4 | 85.3 |
| | DocVQA | 0 | anls | No multimodal support | No multimodal support | 89.4 | 91.6 |
Instruction tuned models
| Category | Benchmark | # Shots | Metric | Llama 3.3 70B | Llama 3.1 405B | Llama 4 Scout | Llama 4 Maverick |
|---|---|---|---|---|---|---|---|
| Image Reasoning | MMMU | 0 | accuracy | No multimodal support | No multimodal support | 69.4 | 73.4 |
| | MMMU Pro^ | 0 | accuracy | No multimodal support | No multimodal support | 52.2 | 59.6 |
| | MathVista | 0 | accuracy | No multimodal support | No multimodal support | 70.7 | 73.7 |
| Image Understanding | ChartQA | 0 | relaxed_accuracy | No multimodal support | No multimodal support | 88.8 | 90.0 |
| | DocVQA (test) | 0 | anls | No multimodal support | No multimodal support | 94.4 | 94.4 |
| Coding | LiveCodeBench (10/01/2024 - 02/01/2025) | 0 | pass@1 | 33.3 | 27.7 | 32.8 | 43.4 |
| Reasoning & Knowledge | MMLU Pro | 0 | macro_avg/em | 68.9 | 73.4 | 74.3 | 80.5 |
| | GPQA Diamond | 0 | accuracy | 50.5 | 49.0 | 57.2 | 69.8 |
| Multilingual | MGSM | 0 | average/em | 91.1 | 91.6 | 90.6 | 92.3 |
| Long context | MTOB (half book) eng->kgv / kgv->eng | - | chrF | Context window is 128K | Context window is 128K | 42.2/36.6 | 54.0/46.4 |
| | MTOB (full book) | ... | ... | ... | ... | ... | ... |
Limitations and Notes
⚠️ Important Note
- Llama 4 has been trained on a broader collection of languages than the 12 supported languages (pre-training includes [200 total languages](https://ai.meta.com/research/no-language-left-behind/)). Developers may fine-tune Llama 4 models for languages beyond the 12 supported languages provided they comply with the Llama 4 Community License and the Acceptable Use Policy. Developers are responsible for ensuring that their use of Llama 4 in additional languages is done in a safe and responsible manner.
- Llama 4 has been tested for image understanding with up to 5 input images. Developers leveraging image understanding beyond this limit are responsible for mitigating the associated risks and should perform additional testing and tuning tailored to their specific applications.
- The 4-bit model currently only works with Unsloth. See [our collection](https://huggingface.co/collections/unsloth/llama-4-67f19503d764b0f3a2a868d2) for versions of Llama 4 in 4-bit and 16-bit formats. Unsloth's [Dynamic Quants](https://unsloth.ai/blog/dynamic-4bit) are selectively quantized, greatly improving accuracy over standard 4-bit quantization (a hedged loading sketch follows this list).
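Below is a minimal loading sketch for the 4-bit weights. It assumes Unsloth's usual `FastLanguageModel.from_pretrained` entry point and text-only use; the repo id is a placeholder, so check the linked collection for the exact checkpoint name and the recommended loader for multimodal workloads.

```python
# Sketch only: assumes Unsloth's standard loading API applies to the 4-bit
# Llama 4 checkpoints. The repo id below is a placeholder -- take the exact
# name from the Unsloth Llama 4 collection linked above.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/<llama-4-4bit-checkpoint>",  # placeholder repo id
    max_seq_length=8192,   # illustrative; set to your context needs
    load_in_4bit=True,     # load the dynamic 4-bit quantized weights
)
```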
License
The Llama 4 models are licensed under the Llama 4 Community License Agreement. The full text of the license can be found at [https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE).
Additional Information
The information you provide when accessing the model will be collected, stored, processed and shared in accordance with the Meta Privacy Policy. You need to provide your full legal name, date of birth, and full organization name with all corporate identifiers. Avoid the use of acronyms and special characters. Failure to follow these instructions may prevent you from accessing this model and others on Hugging Face. You will not have the ability to edit this form after submission, so please ensure all information is accurate.







