🚀 Llama-3 8B Instruct Gradient 4194K (v0.1)
This model extends the context length of Llama-3 8B, demonstrating long-context operation with minimal training.
Join our custom agent and long context (262k-1M+) waitlist: https://forms.gle/L6TDY7dozx8TuoUv7
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, drop us a message at contact@gradient.ai.
For more info, see our end-to-end development service for custom LLMs and AI systems.
This model extends Llama-3 8B's context length from 8k to 4194k. It was developed by Gradient and sponsored by compute from Crusoe Energy. It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. For this stage we trained on 201M tokens, and 1.6B tokens total across all stages, which is ~0.01% of Llama-3's original pre-training data.
✨ Features
Approach:
- Use meta-llama/Meta-Llama-3-8B-Instruct as the base.
- Apply NTK-aware interpolation [1] following scaling laws [2] to set an optimal schedule for RoPE theta (a minimal sketch follows this list).
- Conduct progressive training on increasing context lengths, similar to Large World Model [2] (See details below).
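Because the exact theta schedule is not spelled out in this card, the sketch below shows only the standard NTK-aware scaling rule (theta grows with the context-extension ratio raised to d/(d-2) for per-head dimension d), assuming Llama-3 8B's published base values: rope_theta = 500,000, head dimension 128, native context 8,192. The RoPE Theta row in the table further down was set by Gradient's own schedule plus empirical tuning, so it differs from this first-order estimate.

```python
# Minimal sketch of NTK-aware RoPE theta scaling (not Gradient's exact schedule).
# Assumes Llama-3 8B defaults: rope_theta = 500,000, head_dim = 128, context 8,192.
def ntk_aware_theta(target_ctx: int,
                    base_ctx: int = 8_192,
                    base_theta: float = 500_000.0,
                    head_dim: int = 128) -> float:
    scale = target_ctx / base_ctx
    # NTK-aware rule of thumb from [1]: theta' = theta * scale ** (d / (d - 2))
    return base_theta * scale ** (head_dim / (head_dim - 2))

for ctx in (65_536, 262_144, 524_288, 1_048_576, 4_194_304):
    print(f"{ctx:>9,d} tokens -> theta ~ {ntk_aware_theta(ctx):.3e}")
```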
Infra:
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on Crusoe Energy's high-performance L40S cluster. Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices.
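For intuition only (this is not the EasyContext implementation; it runs in a single process with numpy standing in for GPUs, and causal masking and the custom network topology are omitted), the blockwise Ring Attention idea can be sketched like this: each "device" keeps its query block resident, KV blocks hop one position around the ring per step, and a running-softmax accumulator keeps the result exact regardless of arrival order.

```python
# Toy, single-process sketch of blockwise ring attention.
import numpy as np

def ring_attention_toy(q_blocks, k_blocks, v_blocks):
    """q_blocks, k_blocks, v_blocks: lists of [block_len, d] arrays, one per 'device'."""
    n = len(q_blocks)
    d = q_blocks[0].shape[-1]
    outputs = []
    for i in range(n):                               # each "device" owns one query block
        q = q_blocks[i]
        m = np.full((q.shape[0], 1), -np.inf)        # running max of logits
        l = np.zeros((q.shape[0], 1))                # running sum of exp(logits)
        acc = np.zeros_like(q)                       # running weighted sum of values
        for step in range(n):                        # KV blocks rotate around the ring
            j = (i + step) % n
            scores = q @ k_blocks[j].T / np.sqrt(d)
            m_new = np.maximum(m, scores.max(axis=-1, keepdims=True))
            p = np.exp(scores - m_new)
            scale = np.exp(m - m_new)                # rescale old accumulators
            l = l * scale + p.sum(axis=-1, keepdims=True)
            acc = acc * scale + p @ v_blocks[j]
            m = m_new
        outputs.append(acc / l)                      # exact softmax(QK^T/sqrt(d)) V
    return outputs
```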
Data:
For training data, we generate long contexts by augmenting SlimPajama. We also fine-tune on a chat dataset based on UltraChat [4], following a similar recipe for data augmentation to [2].
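The card does not detail the augmentation recipe; as a purely illustrative sketch (the function and parameters below are hypothetical, not Gradient's pipeline), one common way to build long-context samples is to pack tokenized documents back-to-back until a target sequence length is reached.

```python
# Hypothetical packing sketch: concatenate tokenized SlimPajama-style documents,
# separated by EOS, into fixed-length long-context samples.
from transformers import AutoTokenizer

def pack_documents(docs, tokenizer, target_len=65_536):
    """Yield token-id lists of length target_len built by concatenating docs."""
    buffer = []
    for doc in docs:
        buffer.extend(tokenizer(doc, add_special_tokens=False)["input_ids"])
        buffer.append(tokenizer.eos_token_id)
        while len(buffer) >= target_len:
            yield buffer[:target_len]
            buffer = buffer[target_len:]

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
long_samples = pack_documents(["first document ...", "second document ..."], tokenizer)
```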
Progressive Training Details:
| | 65K | 262K | 524k | 1048k | 4191k |
|---|---|---|---|---|---|
| Initialize From | Llama-3 8B | 65K | 262K | 524k | 1048k |
| Sequence Length 2^N | 16 | 18 | 19 | 20 | 22 |
| RoPE Theta | 15.3M | 207.1M | 1.06B | 2.80B | 45.2B |
| Batch Size | 1 | 1 | 16 | 8 | 2 |
| Gradient Accumulation Steps | 32 | 16 | 1 | 1 | 2 |
| Steps | 30 | 24 | 50 | 50 | 12 (stopped early) |
| Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 | 201326592 |
| Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 |
| # GPUs | 8 | 32 | 512 | 512 | 512 |
| GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S |
| Minutes to Train (Wall) | 202 | 555 | 61 | 87 | 433 |
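Reading the table (this derivation is my own, not stated in the card): the Total Tokens row for the final stage is consistent with sequence length × batch size × gradient-accumulation steps × optimizer steps, i.e. 2^22 × 2 × 2 × 12 = 4,194,304 × 48 = 201,326,592 tokens.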
Evaluation Details:
```
EVAL_MAX_CONTEXT_LENGTH=4194200
EVAL_MIN_CONTEXT_LENGTH=100
EVAL_CONTEXT_INTERVAL=260000
EVAL_DEPTH_INTERVAL=0.2
EVAL_RND_NUMBER_DIGITS=8
```
The haystack used is haystack #3, as detailed in this blog post.
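The evaluation harness itself is not included here; read literally, the settings above imply a needle-in-a-haystack grid roughly like the following hypothetical sketch, where the needle is assumed to be an 8-digit random number inserted at each relative depth.

```python
# Hypothetical reconstruction of the evaluation grid implied by the settings above;
# the actual harness (haystack #3) is described in the linked blog post, not here.
import random

EVAL_MAX_CONTEXT_LENGTH = 4194200
EVAL_MIN_CONTEXT_LENGTH = 100
EVAL_CONTEXT_INTERVAL = 260000
EVAL_DEPTH_INTERVAL = 0.2
EVAL_RND_NUMBER_DIGITS = 8

# Context lengths swept from min to max in fixed intervals.
context_lengths = list(range(EVAL_MIN_CONTEXT_LENGTH,
                             EVAL_MAX_CONTEXT_LENGTH + 1,
                             EVAL_CONTEXT_INTERVAL))

# Relative insertion depths for the needle: 0.0, 0.2, ..., 1.0.
depths = [round(i * EVAL_DEPTH_INTERVAL, 2)
          for i in range(round(1 / EVAL_DEPTH_INTERVAL) + 1)]

# Assumed needle: a random 8-digit number hidden in the haystack.
needle = random.randrange(10 ** (EVAL_RND_NUMBER_DIGITS - 1),
                          10 ** EVAL_RND_NUMBER_DIGITS)

print(f"{len(context_lengths)} context lengths x {len(depths)} depths, needle={needle}")
```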
Quants:
There are currently no quants released. We recommend running the KV cache in fp16 precision for higher accuracy.
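For rough sizing (my own arithmetic, using Llama-3 8B's published architecture of 32 layers, 8 KV heads via GQA, and head dimension 128), an fp16 KV cache costs about 128 KiB per token, so the full 4,194,304-token context needs on the order of 512 GiB of KV cache spread across devices.

```python
# Back-of-the-envelope fp16 KV cache footprint for Llama-3 8B at full context.
n_layers, n_kv_heads, head_dim, bytes_per_elem = 32, 8, 128, 2   # fp16 = 2 bytes
per_token_bytes = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem  # K and V
context_len = 4_194_304
total_gib = per_token_bytes * context_len / 2**30
print(f"{per_token_bytes} bytes/token, {total_gib:.0f} GiB for the full context")
```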
📚 Documentation
The Gradient AI Team
https://gradient.ai/
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
Contact Us
Drop an email to contact@gradient.ai
Citation
@misc{gradientlongcontextllama3,
title={Llama 3 Gradient: A series of long context models},
author={Leonid Pekelis and Michael Feil and Forrest Moret and Mark Huang and Tiffany Peng},
year={2024},
url = {https://gradient.ai/blog/scaling-rotational-embeddings-for-long-context-language-models}
}
References
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] https://github.com/jzhang38/EasyContext
[4] Ding, Ning, et al. "Enhancing Chat Language Models by Scaling High-Quality Instructional Conversations." arXiv preprint arXiv:2305.14233 (2023).
🔧 Technical Details
Base Model
Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
Model developers: Meta
Variations: Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
Input: Models input text only.
Output: Models generate text and code only.
Model Architecture: Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Property | Details |
---|---|
Model Type | Llama 3 family of large language models |
Training Data | A new mix of publicly available online data. |
Params | 8B and 70B |
Context length | 8k |
GQA | Yes |
Token count | 15T+ |
Knowledge cutoff | March, 2023 (8B); December, 2023 (70B) |
Llama 3 family of models: Token counts refer to pretraining data only. Both the 8B and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date: April 18, 2024.
Status: This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
License: A custom commercial license is available at: https://llama.meta.com/llama3/license
Where to send questions or comments about the model: Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
Intended Use
Intended Use Cases: Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant - like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
Out-of-scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.
⚠️ Important Note
Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
How to use
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase.
💻 Usage Examples
Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
Basic Usage
```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
Advanced Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
Use with llama3
Please follow the instructions in the [repository](https://github.com/meta-llama/llama3).
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
Hardware and Software
Training Factors: We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
Carbon Footprint: Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.
| | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
|---|---|---|---|
| Llama 3 8B | 1.3M | 700 | 390 |
| Llama 3 70B | 6.4M | 700 | 1900 |
| Total | 7.7M | | 2290 |
CO2 emissions during pre-training: Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
Training Data
Overview: Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
Data Freshness: The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks.
📄 License
The license for this model is llama3. A custom commercial license for the base model is available at: https://llama.meta.com/llama3/license

