🚀 Llama 3.2 - Multilingual Large Language Models
The Llama 3.2 collection of multilingual large language models (LLMs) offers a range of pretrained and instruction-tuned generative models. These models are designed for multilingual dialogue use cases and provide high-performance solutions for both commercial and research purposes.
🚀 Quick Start
To start using the Llama 3.2 models, you need to accept the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE). After that, refer to the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md) for instructions on providing feedback, and to [Llama Recipes](https://github.com/meta-llama/llama-recipes) for technical details about generation parameters and usage in applications.
✨ Features
- Multilingual Support: Officially supports English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai, with training on a broader set of languages.
- Optimized for Dialogue: Instruction-tuned text-only models are optimized for multilingual dialogue, including agentic retrieval and summarization tasks.
- High Performance: Outperforms many available open-source and closed chat models on common industry benchmarks.
- Scalable Inference: All model versions use Grouped-Query Attention (GQA) for improved inference scalability (see the sketch after this list).
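As a rough illustration of the Grouped-Query Attention mechanism mentioned above, here is a minimal, self-contained PyTorch sketch. It is not Meta's implementation: the head counts and dimensions are illustrative only (Llama 3.2's actual configuration differs), and the code simply shows how a small number of key/value heads can be shared across a larger group of query heads, shrinking the KV cache kept during decoding.

```python
# Minimal Grouped-Query Attention sketch (illustrative only, not Meta's code).
import torch
import torch.nn.functional as F

def grouped_query_attention(x, wq, wk, wv, n_q_heads, n_kv_heads):
    """x: (batch, seq, dim); wq/wk/wv: projection matrices."""
    bsz, seq, dim = x.shape
    head_dim = dim // n_q_heads
    group = n_q_heads // n_kv_heads  # query heads per KV head

    q = (x @ wq).view(bsz, seq, n_q_heads, head_dim)
    k = (x @ wk).view(bsz, seq, n_kv_heads, head_dim)
    v = (x @ wv).view(bsz, seq, n_kv_heads, head_dim)

    # Repeat each KV head so it lines up with its group of query heads.
    k = k.repeat_interleave(group, dim=2)
    v = v.repeat_interleave(group, dim=2)

    q, k, v = (t.transpose(1, 2) for t in (q, k, v))  # (bsz, heads, seq, head_dim)
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
    return out.transpose(1, 2).reshape(bsz, seq, dim)

# Toy dimensions, chosen only to keep the example small.
dim, n_q_heads, n_kv_heads = 64, 8, 2
x = torch.randn(1, 5, dim)
wq = torch.randn(dim, dim)
wk = torch.randn(dim, n_kv_heads * (dim // n_q_heads))
wv = torch.randn(dim, n_kv_heads * (dim // n_q_heads))
print(grouped_query_attention(x, wq, wk, wv, n_q_heads, n_kv_heads).shape)
```

Because only `n_kv_heads` key/value projections are cached per token instead of `n_q_heads`, the memory footprint of autoregressive inference shrinks while query expressiveness is retained.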
📦 Installation
No specific installation steps are provided in the original document. The usage sketch below assumes a standard Python environment with `torch`, `transformers`, and `accelerate` installed, plus Hugging Face authentication for the gated checkpoints.
💻 Usage Examples
No code examples are provided in the original document; the snippet below is an illustrative sketch only.
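The following is a hedged, minimal sketch of running the instruction-tuned 1B model for dialogue with the Hugging Face `transformers` library. The model ID `meta-llama/Llama-3.2-1B-Instruct` and the generation settings are assumptions based on common usage, not taken from the original document; access to the repository requires accepting the Llama 3.2 Community License first.

```python
# Minimal dialogue sketch (assumptions: a recent transformers release, an access
# token already configured via `huggingface-cli login`, and the assumed model ID below).
import torch
from transformers import pipeline

model_id = "meta-llama/Llama-3.2-1B-Instruct"  # assumed instruct checkpoint name

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires `accelerate`
)

messages = [
    {"role": "system", "content": "You are a concise multilingual assistant."},
    {"role": "user", "content": "Summarize the benefits of grouped-query attention."},
]

# The pipeline applies the model's chat template to the message list.
outputs = pipe(messages, max_new_tokens=128)
print(outputs[0]["generated_text"][-1]["content"])
```

The 3B instruct variant can be substituted by changing `model_id`; generation parameters such as `max_new_tokens` should be tuned per the Llama Recipes guidance.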
📚 Documentation
Model Information
The Llama 3.2 collection consists of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The instruction-tuned text-only models are optimized for multilingual dialogue use cases.
| Property | Details |
|---|---|
| Model Developer | Meta |
| Model Architecture | An auto-regressive language model using an optimized transformer architecture. Tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF). |
| Training Data | A new mix of publicly available online data |
| Params | 1B (1.23B) and 3B (3.21B) |
| Input Modalities | Multilingual text |
| Output Modalities | Multilingual text and code |
| Context Length | 128k (for the non-quantized 1B model), 8k (for the quantized 1B model) |
| GQA | Yes |
| Shared Embeddings | Yes |
| Token count | Up to 9T tokens |
| Knowledge cutoff | December 2023 |
| Supported Languages | English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. Can be fine-tuned for other languages within license terms. |
| Model Release Date | Oct 24, 2024 |
| Status | A static model trained on an offline dataset. Future improvements may be released. |
| License | [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) |
Intended Use
- Intended Use Cases: Commercial and research use in multiple languages. Instruction-tuned text-only models are intended for assistant-like chat, knowledge retrieval, and summarization. Pretrained models can be adapted for a variety of natural language generation tasks, and quantized models are intended for on-device use with limited compute resources.
- Out of Scope: Any use that violates applicable laws, the Acceptable Use Policy, or the Llama 3.2 Community License, as well as use in languages beyond those officially supported.
Hardware and Software
- Training Factors: Custom training libraries, Meta's custom-built GPU cluster, and production infrastructure were used for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.
- Training Energy Use: A cumulative 916k GPU hours of computation on H100-80GB (TDP of 700W) hardware.
- Training Greenhouse Gas Emissions: Estimated total location-based greenhouse gas emissions were 240 tons CO2eq. Market-based emissions were 0 tons CO2eq due to Meta's use of renewable energy.
| Property | Details |
|---|---|
| Training Time (GPU hours) | Llama 3.2 1B: 370k |
| Logit Generation Time (GPU hours) | Llama 3.2 1B: - |
| Training Power Consumption (W) | Llama 3.2 1B: 700 |
| Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Llama 3.2 1B: [value not fully provided in original] |
| Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | Llama 3.2 1B: [value not fully provided in original] |
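As a back-of-envelope reading of the figures above, the stated 916k cumulative GPU hours at a 700W TDP and the 240 tons CO2eq location-based total imply an effective emissions intensity of roughly 0.37 kg CO2eq per kWh. This is only arithmetic on the numbers given, not Meta's methodology, and it ignores data-center overheads such as PUE.

```python
# Back-of-envelope reading of the stated training figures
# (not Meta's methodology; ignores PUE and other overheads).
gpu_hours = 916_000          # cumulative GPU hours stated above
tdp_kw = 0.7                 # H100-80GB TDP of 700W, in kW
location_based_tons = 240    # stated location-based emissions, tons CO2eq

energy_kwh = gpu_hours * tdp_kw                      # ~641,200 kWh
implied_intensity = location_based_tons * 1000 / energy_kwh
print(f"Energy: {energy_kwh:,.0f} kWh")
print(f"Implied intensity: {implied_intensity:.2f} kg CO2eq per kWh")
```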
License Agreement
The use of Llama 3.2 is governed by the Llama 3.2 Community License. The key points of the license are as follows:
1. License Rights and Redistribution
- Grant of Rights: You are granted a non-exclusive, worldwide, non-transferable, and royalty-free limited license to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.
- Redistribution and Use:
- If you distribute or make available the Llama Materials or related products, you must provide a copy of the license and prominently display “Built with Llama”. If you use the Llama Materials to improve an AI model, include “Llama” at the beginning of the model name.
- If you receive Llama Materials as part of an integrated end-user product, Section 2 of the agreement may not apply.
- You must retain the attribution notice in all copies of the Llama Materials.
- Your use must comply with applicable laws and the Acceptable Use Policy.
2. Additional Commercial Terms
If the monthly active users of your products or services exceed 700 million, you must request a license from Meta.
3. Disclaimer of Warranty
The Llama Materials are provided “as is” without warranties.
4. Limitation of Liability
Meta is not liable for lost profits or indirect damages.
5. Intellectual Property
- No trademark licenses are granted except for using “Llama” as required.
- You own your derivative works of the Llama Materials.
- If you sue Meta for infringement, your license will terminate.
6. Term and Termination
The agreement starts upon acceptance and continues until termination. Meta can terminate if you breach the agreement. Sections 3, 4, and 7 survive termination.
7. Governing Law and Jurisdiction
The agreement is governed by California law, and disputes are subject to California courts.
Acceptable Use Policy
Meta is committed to promoting the safe and fair use of Llama 3.2. Prohibited uses include engaging in illegal activities, promoting violence or deception, and using the model in ways that violate privacy or professional regulations.
You can report violations, bugs, or other problems through the following channels:
- Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)
- Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback
- Reporting bugs and security concerns: facebook.com/whitehat/info
- Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: LlamaUseReport@meta.com
Gated Access Information
To access the model, you need to provide information such as your first name, last name, date of birth, country, affiliation, job title, etc. The information will be collected, stored, processed, and shared in accordance with the Meta Privacy Policy.
🔧 Technical Details
Beyond the architecture and training information summarized in the documentation above, the original document provides no further technical details.
📄 License
The use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE).