🚀 Linearized Experts
This is a 4-bit quantized version of the model, with the experts broken up and linearized so they play nicely with PEFT/LoRA. To use it with Axolotl, add the following to your YAML config (a loading sketch outside Axolotl follows the snippet):

```yaml
llama4_linearized_experts: true
```
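Outside Axolotl, the checkpoint can also be fine-tuned directly with PEFT. The sketch below is illustrative only: the repository id is a hypothetical placeholder for this quantized checkpoint, and `target_modules` should be adjusted to match the projection names exposed by the linearized expert layers.

```python
# Illustrative PEFT/LoRA setup (not the only supported path).
# The repo id below is a placeholder; substitute the id of this quantized checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "your-org/Llama-4-Scout-linearized-4bit"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Adjust target_modules to the module names present in the linearized checkpoint.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity-check that only adapter weights are trainable
```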
📚 Model Information
The Llama 4 collection comprises natively multimodal AI models that enable text and multimodal experiences. These models leverage a mixture-of-experts architecture to offer industry-leading performance in text and image understanding.
These Llama 4 models mark the beginning of a new era for the Llama ecosystem. We are launching two efficient models in the Llama 4 series: Llama 4 Scout, a 17-billion-active-parameter model with 16 experts, and Llama 4 Maverick, a 17-billion-active-parameter model with 128 experts.
Model developer: Meta
Model Architecture: The Llama 4 models are auto-regressive language models that use a mixture-of-experts (MoE) architecture and incorporate early fusion for native multimodality.
| Property | Details |
|---|---|
| Model Type | The Llama 4 collection consists of natively multimodal AI models leveraging a mixture-of-experts architecture for text and image understanding. There are two models in the series: Llama 4 Scout (17 billion active parameters, 16 experts) and Llama 4 Maverick (17 billion active parameters, 128 experts). |
| Training Data | A mix of publicly available, licensed data and information from Meta's products and services. This includes publicly shared posts from Instagram and Facebook and people's interactions with Meta AI. Learn more in our Privacy Center. |
| Params (Llama 4 Scout) | 17B (activated), 109B (total) |
| Params (Llama 4 Maverick) | 17B (activated), 400B (total) |
| Input modalities | Multilingual text and image |
| Output modalities | Multilingual text and code |
| Context length (Llama 4 Scout) | 10M tokens |
| Context length (Llama 4 Maverick) | 1M tokens |
| Token count (Llama 4 Scout) | ~40T |
| Token count (Llama 4 Maverick) | ~22T |
| Knowledge cutoff | August 2024 |
Supported languages: Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese.
Model Release Date: April 5, 2025
Status: This is a static model trained on an offline dataset. Future versions of the tuned models may be released as we improve model behavior with community feedback.
License: A custom commercial license, the Llama 4 Community License Agreement, is available at: https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE
Where to send questions or comments about the model: Instructions on how to provide feedback or comments on the model can be found in the Llama README. For more technical information about generation parameters and recipes for how to use Llama 4 in applications, please go here.
🎯 Intended Use
Intended Use Cases: Llama 4 is intended for commercial and research use in multiple languages. Instruction tuned models are intended for assistant-like chat and visual reasoning tasks, whereas pretrained models can be adapted for natural language generation. For vision, Llama 4 models are also optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The Llama 4 model collection also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 4 Community License allows for these use cases.
Out-of-scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 4 Community License. Use in languages or capabilities beyond those explicitly referenced as supported in this model card.
⚠️ Important Note
- Llama 4 has been trained on a broader collection of languages than the 12 supported languages (pre-training includes 200 total languages). Developers may fine-tune Llama 4 models for languages beyond the 12 supported languages provided they comply with the Llama 4 Community License and the Acceptable Use Policy. Developers are responsible for ensuring that their use of Llama 4 in additional languages is done in a safe and responsible manner.
- Llama 4 has been tested for image understanding with up to 5 input images. If leveraging image understanding capabilities beyond this, developers are responsible for ensuring that their deployments mitigate the associated risks and should perform additional testing and tuning tailored to their specific applications.
💻 Usage Examples
Basic Usage
```python
# Make sure you have transformers v4.51.0 or newer installed, or upgrade with `pip install -U transformers`.
from transformers import pipeline
import torch

model_id = "meta-llama/Llama-4-Scout-17B-16E"

# Load the base (pretrained) checkpoint for plain text completion.
pipe = pipeline(
    "text-generation",
    model=model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

output = pipe("Roses are red,", max_new_tokens=200)
print(output[0]["generated_text"])
```
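The instruction-tuned variant can be driven through the same pipeline with a chat message list, in which case the pipeline applies the model's chat template before generating. The instruct repository id below is assumed and should be verified on the Hub.

```python
# Chat-style generation with the instruction-tuned checkpoint (repo id assumed).
from transformers import pipeline
import torch

chat_pipe = pipeline(
    "text-generation",
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

messages = [
    {"role": "user", "content": "Explain mixture-of-experts routing in two sentences."},
]

# Passing a message list makes the pipeline apply the chat template automatically.
result = chat_pipe(messages, max_new_tokens=200)
print(result[0]["generated_text"][-1]["content"])  # assistant reply
```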
🔧 Technical Details
Hardware and Software
Training Factors: We used custom training libraries, Meta's custom built GPU clusters, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.
Training Energy Use: Model pre-training utilized a cumulative 7.38M GPU hours of computation on H100-80GB (TDP of 700W) hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
| Model Name | Training Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
|---|---|---|---|---|
| Llama 4 Scout | 5.0M | 700 | 1,354 | 0 |
| Llama 4 Maverick | 2.38M | 700 | 645 | 0 |
| Total | 7.38M | - | 1,999 | 0 |
The methodology used to determine training energy use and greenhouse gas emissions can be found here. Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
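As a rough sanity check on the table above, GPU-hours and per-device TDP translate into raw energy as follows; the PUE adjustment and grid emission factors behind the reported CO2eq figures are not reproduced here.

```python
# Back-of-the-envelope energy from the table above (illustrative only; excludes PUE
# and the grid emission factors used for the reported CO2eq numbers).
gpu_hours = {"Llama 4 Scout": 5.0e6, "Llama 4 Maverick": 2.38e6}
tdp_watts = 700

for name, hours in gpu_hours.items():
    energy_gwh = hours * tdp_watts / 1e9  # Wh -> GWh
    print(f"{name}: ~{energy_gwh:.2f} GWh at peak TDP")
```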
Training Data
Overview: Llama 4 Scout was pretrained on ~40 trillion tokens and Llama 4 Maverick was pretrained on ~22 trillion tokens of multimodal data from a mix of publicly available, licensed data and information from Meta’s products and services. This includes publicly shared posts from Instagram and Facebook and people’s interactions with Meta AI.
Data Freshness: The pretraining data has a cutoff of August 2024.
📊 Benchmarks
In this section, we report the results for Llama 4 relative to our previous models. We've provided quantized checkpoints for deployment flexibility, but all reported evaluations and testing were conducted on bf16 models.
Pre-trained models
| Category | Benchmark | # Shots | Metric | Llama 3.1 70B | Llama 3.1 405B | Llama 4 Scout | Llama 4 Maverick |
|---|---|---|---|---|---|---|---|
| Reasoning & Knowledge | MMLU | 5 | macro_avg/acc_char | 79.3 | 85.2 | 79.6 | 85.5 |
| | MMLU-Pro | 5 | macro_avg/em | 53.8 | 61.6 | 58.2 | 62.9 |
| | MATH | 4 | em_maj1@1 | 41.6 | 53.5 | 50.3 | 61.2 |
| Code | MBPP | 3 | pass@1 | 66.4 | 74.4 | 67.8 | 77.6 |
| Multilingual | TydiQA | 1 | average/f1 | 29.9 | 34.3 | 31.5 | 31.7 |
| Image | ChartQA | 0 | relaxed_accuracy | No multimodal support | No multimodal support | 83.4 | 85.3 |
| | DocVQA | 0 | anls | No multimodal support | No multimodal support | 89.4 | 91.6 |
Instruction tuned models
| Category | Benchmark | # Shots | Metric | Llama 3.3 70B | Llama 3.1 405B | Llama 4 Scout | Llama 4 Maverick |
|---|---|---|---|---|---|---|---|
| Image Reasoning | MMMU | 0 | accuracy | No multimodal support | No multimodal support | 69.4 | 73.4 |
| | MMMU Pro^ | 0 | accuracy | No multimodal support | No multimodal support | 52.2 | 59.6 |
| | MathVista | 0 | accuracy | No multimodal support | No multimodal support | 70.7 | 73.7 |
| Image Understanding | ChartQA | 0 | relaxed_accuracy | No multimodal support | No multimodal support | 88.8 | 90.0 |
| | DocVQA (test) | 0 | anls | No multimodal support | No multimodal support | 94.4 | 94.4 |
| Coding | LiveCodeBench (10/01/2024-02/01/2025) | 0 | pass@1 | 33.3 | 27.7 | 32.8 | 43.4 |
| Reasoning & Knowledge | MMLU Pro | 0 | macro_avg/acc | 68.9 | 73.4 | 74.3 | 80.5 |
| | GPQA Diamond | 0 | accuracy | 50.5 | 49.0 | 57.2 | 69.8 |
| Multilingual | MGSM | 0 | average/em | 91.1 | 91.6 | 90.6 | 92.3 |
| Long context | MTOB (half book) eng->kgv / kgv->eng | - | chrF | Context window is 128K | Context window is 128K | 42.2/36.6 | 54.0/46.4 |
| | MTOB (full book) eng->kgv / kgv->eng | - | chrF | Context window is 128K | Context window is 128K | 39.7/36.3 | 50.8/46.7 |
^Reported numbers for MMMU Pro are the average of the Standard and Vision tasks.
🔍 Quantization
The Llama 4 Scout model is released as BF16 weights, but can fit within a single H100 GPU with on-the-fly int4 quantization; the Llama 4 Maverick model is released as both BF16 and FP8 quantized weights. The FP8 quantized weights fit on a single H100 DGX host while still maintaining quality. We provide code for on-the-fly int4 quantization which minimizes performance degradation as well.
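Meta's on-the-fly int4 quantization code is distributed with the Llama release. Purely as an illustration of fitting Llama 4 Scout onto a single GPU, a generic 4-bit load via bitsandbytes might look like the sketch below; this is not Meta's int4 implementation, and quality characteristics may differ.

```python
# Generic 4-bit load via bitsandbytes (illustrative; not Meta's on-the-fly int4 code).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-4-Scout-17B-16E",
    quantization_config=bnb_config,
    device_map="auto",
)
```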
🛡️ Safeguards
As part of our release approach, we followed a three-pronged strategy to manage risks:
- Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
- Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
- Provide protections for the community to help prevent the misuse of our models.
Llama is a foundational technology designed for use in a variety of use cases; examples of how Meta's Llama models have been deployed can be found in our Community Stories webpage.
📄 License
A custom commercial license, the Llama 4 Community License Agreement, is available at: https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE