🚀 MaziyarPanahi/Meta-Llama-3-70B-Instruct-GGUF
This repository provides GGUF-format quantized models based on the meta-llama/Meta-Llama-3-70B-Instruct model, enabling efficient local text generation.
🚀 Quick Start
Downloading the Model
You can download only the quants you need instead of cloning the entire repository as follows:
```bash
huggingface-cli download MaziyarPanahi/Meta-Llama-3-70B-Instruct-GGUF --local-dir . --include '*Q2_K*gguf'
```
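If you prefer to script the download, here is a minimal Python sketch using the `huggingface_hub` library (the `*Q2_K*gguf` pattern mirrors the CLI example above; swap in the pattern for the quant you actually want):

```python
# Minimal sketch: fetch only the matching quant files via the huggingface_hub API.
# The filename pattern below mirrors the CLI example; adjust it to your needs.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="MaziyarPanahi/Meta-Llama-3-70B-Instruct-GGUF",
    local_dir=".",
    allow_patterns=["*Q2_K*gguf"],  # download only files matching this pattern
)
```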
Loading GGUF Models
You MUST follow the prompt template provided by Llama-3:

```bash
./llama.cpp/main -m Meta-Llama-3-70B-Instruct.Q2_K.gguf -r '<|eot_id|>' --in-prefix "\n<|start_header_id|>user<|end_header_id|>\n\n" --in-suffix "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" -p "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.<|eot_id|>\n<|start_header_id|>user<|end_header_id|>\n\nHi! How are you?<|eot_id|>\n<|start_header_id|>assistant<|end_header_id|>\n\n" -n 1024
```
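If you would rather drive the GGUF file from Python, the sketch below uses the `llama-cpp-python` bindings; the model path and generation settings are assumptions to adapt to your setup. Recent versions of the bindings read the chat template from the GGUF metadata, so `create_chat_completion` produces the Llama-3 prompt format shown above for you:

```python
# Sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="Meta-Llama-3-70B-Instruct.Q2_K.gguf",  # path to your downloaded quant
    n_ctx=8192,       # context window size
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

# create_chat_completion formats the messages with the model's chat template,
# so the <|start_header_id|>/<|eot_id|> markup is generated automatically.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful, smart, kind, and efficient AI assistant."},
        {"role": "user", "content": "Hi! How are you?"},
    ],
    max_tokens=1024,
)
print(out["choices"][0]["message"]["content"])
```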
✨ Features
- Based on Meta-Llama-3-70B-Instruct: Leveraging the powerful capabilities of the original model.
- Quantized Models: Available in various quantization levels (2-bit, 3-bit, etc.) for different resource requirements.
- GGUF Format: Compatible with the GGUF format for efficient inference.
📦 Installation
The installation mainly involves downloading the necessary model files. As shown in the Quick Start section, you can use `huggingface-cli` to download specific quantized models.
💻 Usage Examples
Use with transformers
```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3-70B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Render the conversation with the Llama-3 chat template.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Llama 3 ends each turn with <|eot_id|> in addition to the regular EOS token,
# so generation should stop on either.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
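Note that this `transformers` example runs the original full-precision `meta-llama/Meta-Llama-3-70B-Instruct` checkpoint rather than the GGUF quants in this repository; the GGUF files are intended for llama.cpp-compatible runtimes such as the commands shown in the Quick Start.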
Use with llama3
Please follow the instructions in the repository. To download the original checkpoints, see the example command below leveraging `huggingface-cli`:

```bash
huggingface-cli download meta-llama/Meta-Llama-3-70B-Instruct --include "original/*" --local-dir Meta-Llama-3-70B-Instruct
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
📚 Documentation
Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, great care was taken to optimize helpfulness and safety.
Property | Details |
---|---|
Model Developers | Meta |
Variations | Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. |
Input | Models input text only. |
Output | Models generate text and code only. |
Model Architecture | Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. |
Training Data | A new mix of publicly available online data. |
Model Release Date | April 18, 2024. |
Status | This is a static model trained on an offline dataset. Future versions of the tuned models will be released as model safety is improved with community feedback. |
License | A custom commercial license is available at: https://llama.meta.com/llama3/license |
Intended Use
- Intended Use Cases: Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
- Out-of-scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.
⚠️ Important Note
Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
Hardware and Software
- Training Factors: Custom training libraries, Meta's Research SuperCluster, and production clusters were used for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
- Carbon Footprint: Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100 - 80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
Property | Llama 3 8B | Llama 3 70B | Total |
---|---|---|---|
Time (GPU hours) | 1.3M | 6.4M | 7.7M |
Power Consumption (W) | 700 | 700 | - |
Carbon Emitted (tCO2eq) | 390 | 1900 | 2290 |
Training Data
- Overview: Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
- Data Freshness: The pretraining data has a cutoff of March 2023 for the 8B model and December 2023 for the 70B model.
Benchmarks
This section reports results for the Llama 3 models on standard automatic benchmarks. All evaluations use Meta's internal evaluations library. For details on the methodology, see here.
Base pretrained models
Category | Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
---|---|---|---|---|---|---|
General | MMLU (5-shot) | 66.6 | 45.7 | 53.8 | 79.5 | 69.7 |
General | AGIEval English (3-5 shot) | 45.9 | 28.8 | 38.7 | 63.0 | 54.8 |
General | CommonSenseQA (7-shot) | 72.6 | 57.6 | 67.6 | 83.8 | 78.7 |
General | Winogrande (5-shot) | 76.1 | 73.3 | 75.4 | 83.1 | 81.8 |
General | BIG-Bench Hard (3-shot, CoT) | 61.1 | 38.1 | 47.0 | 81.3 | 65.7 |
General | ARC-Challenge (25-shot) | 78.6 | 53.7 | 67.6 | 93.0 | 85.3 |
Knowledge reasoning | TriviaQA-Wiki (5-shot) | 78.5 | 72.1 | 79.6 | 89.7 | 87.5 |
Reading comprehension | SQuAD (1-shot) | 76.4 | 72.2 | 72.1 | 85.6 | 82.6 |
Reading comprehension | QuAC (1-shot, F1) | 44.4 | 39.6 | 44.9 | 51.1 | 49.4 |
Reading comprehension | BoolQ (0-shot) | 75.7 | 65.5 | 66.9 | 79.0 | 73.1 |
Reading comprehension | DROP (3-shot, F1) | 58.4 | 37.9 | 49.8 | 79.7 | 70.2 |
Instruction tuned models
Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
---|---|---|---|---|---|
MMLU (5-shot) | 68.4 | 34.1 | 47.8 | 82.0 | 52.9 |
GPQA (0-shot) | 34.2 | 21.7 | 22.3 | 39.5 | 21.0 |
HumanEval (0-shot) | 62.2 | 7.9 | 14.0 | 81.7 | 25.6 |
GSM-8K (8-shot, CoT) | 79.6 | 25.7 | 77.4 | 93.0 | 57.5 |
MATH (4-shot, CoT) | 30.0 | 3.8 | 6.7 | 50.4 | 11.6 |
Responsibility & Safety
Meta believes that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. Meta is committed to Responsible AI development and took a series of steps to limit misuse and harm and to support the open source community.
Foundation models are widely capable technologies built for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from model pre-training and fine-tuning to the deployment of systems composed of safeguards that tailor safety to the specific use case and audience.
As part of the Llama 3 release, the Responsible Use Guide was updated to outline the steps and best practices for developers to implement model- and system-level safety for their application. A set of resources including Meta Llama Guard 2 and Code Shield safeguards is also provided. These tools have proven to drastically reduce residual risks of LLM systems while maintaining a high level of helpfulness.
📄 License
A custom commercial license is available at: https://llama.meta.com/llama3/license

