🚀 Function Calling Fine-tuned Llama 3 Instruct
This model is fine-tuned for function calling and is suitable for commercial use under the Llama 3 Community license.
⚠️ Important Note
Update (July 23rd, 2024): The base instruct model performs better than this model when using zero-shot prompting. See here for the video tutorial.
Check out other fine-tuned function calling models here.
🚀 Quick Start
Quick Server Setup
- Runpod one-click TGI template here (a sketch of querying such an endpoint follows below).
- See this YouTube video for guidance on inference with this model.
- Runpod affiliate link (helps support the Trelis channel).
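Once a TGI endpoint is running (for example, from the Runpod template above), it can be queried from Python. A minimal sketch using huggingface_hub; the endpoint URL is a placeholder you would replace with your pod's address:

```python
# Minimal sketch of querying a TGI endpoint; the URL is a placeholder, not a real endpoint.
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080")  # replace with your pod's endpoint

# Send a fully formatted prompt (see the prompt format section below).
response = client.text_generation("Hey, how are you today?", max_new_tokens=128)
print(response)
```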
Inference Scripts
See below for a sample prompt format.
Complete inference scripts are available for purchase here:
- Support for TGI, vLLM and Llama.cpp
- Automate catching, handling and chaining of function calls (a minimal sketch of this step follows below).
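For orientation, here is a minimal, hypothetical sketch of what the catch-and-dispatch step can look like. The function registry, stub function, and helper names are illustrative assumptions, not the purchasable scripts:

```python
# Illustrative sketch only. Assumes the model returns a raw JSON function call like:
# {"name": "get_current_weather", "arguments": {"city": "London"}}
import json

def get_current_weather(city: str, format: str = "celsius") -> dict:
    # Stub implementation for demonstration.
    return {"temperature": "15 C", "condition": "Cloudy"}

AVAILABLE_FUNCTIONS = {"get_current_weather": get_current_weather}

def try_dispatch(model_output: str):
    """Return the function result if the output is a function call, else None."""
    try:
        call = json.loads(model_output)
        fn = AVAILABLE_FUNCTIONS[call["name"]]
    except (json.JSONDecodeError, KeyError, TypeError):
        return None  # plain-text answer, not a recognized function call
    return fn(**call["arguments"])
```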
💻 Usage Examples
Basic Usage
Prompt Format - Using tokenizer.apply_chat_template
For an easier application of the prompt, you can set up the conversation as follows. Note that the conversation below is complete; if you want to feed the conversation to the model for generation, remove the assistant message.

Set up `messages`:
```json
[
    {
        "role": "function_metadata",
        "content": "FUNCTION_METADATA"
    },
    {
        "role": "user",
        "content": "What is the current weather in London?"
    },
    {
        "role": "function_call",
        "content": "{\n \"name\": \"get_current_weather\",\n \"arguments\": {\n \"city\": \"London\"\n }\n}"
    },
    {
        "role": "function_response",
        "content": "{\n \"temperature\": \"15 C\",\n \"condition\": \"Cloudy\"\n}"
    },
    {
        "role": "assistant",
        "content": "The current weather in London is Cloudy with a temperature of 15 Celsius"
    }
]
```
with `FUNCTION_METADATA` as:
```json
[
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "This function gets the current weather in a given city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city, e.g., San Francisco"
                    },
                    "format": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use."
                    }
                },
                "required": ["city"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_clothes",
            "description": "This function provides a suggestion of clothes to wear based on the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "temperature": {
                        "type": "string",
                        "description": "The temperature, e.g., 15 C or 59 F"
                    },
                    "condition": {
                        "type": "string",
                        "description": "The weather condition, e.g., 'Cloudy', 'Sunny', 'Rainy'"
                    }
                },
                "required": ["temperature", "condition"]
            }
        }
    }
]
```
and then apply the chat template to get a formatted prompt:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Trelis/Meta-Llama-3-8B-Instruct-function-calling', trust_remote_code=True)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```
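From here, generation follows the standard transformers pattern. A minimal sketch, assuming the `messages` list above (truncated after the user turn) and typical dtype/device choices:

```python
# Minimal generation sketch; dtype and device settings are typical choices, not requirements.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    'Trelis/Meta-Llama-3-8B-Instruct-function-calling',
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
# Decode only the newly generated tokens (the function call or the answer).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```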
If you are using a gated model, you need to first run:

```
pip install huggingface_hub
huggingface-cli login
```
Manual Prompt:
```
<|begin_of_text|><|start_header_id|>function_metadata<|end_header_id|>

[
    {
        "type": "function",
        "function": {
            "name": "get_stock_price",
            "description": "Get the stock price of an array of stocks",
            "parameters": {
                "type": "object",
                "properties": {
                    "names": {
                        "type": "array",
                        "items": {
                            "type": "string"
                        },
                        "description": "An array of stocks"
                    }
                },
                "required": [
                    "names"
                ]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_big_stocks",
            "description": "Get the names of the largest N stocks by market cap",
            "parameters": {
                "type": "object",
                "properties": {
                    "number": {
                        "type": "integer",
                        "description": "The number of largest stocks to get the names of, e.g. 25"
                    },
                    "region": {
                        "type": "string",
                        "description": "The region to consider, can be \"US\" or \"World\"."
                    }
                },
                "required": [
                    "number"
                ]
            }
        }
    }
]<|eot_id|><|start_header_id|>user<|end_header_id|>

Get the names of the five largest stocks by market cap<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```
Generated Response:
```
{
    "name": "get_big_stocks",
    "arguments": {
        "number": 5,
        "region": "US"
    }
}<|eot_id|>
```
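To chain calls, the role structure above implies a simple loop: parse the generated JSON, execute the function, append `function_call` and `function_response` turns, and generate again for the final answer. A minimal sketch, assuming the `tokenizer` and `messages` objects from the `apply_chat_template` section, a `generated` string holding the model output, and a stubbed function result:

```python
# Sketch of chaining a function call back into the conversation.
import json

# `generated` holds the model's raw output, e.g. the get_big_stocks call above.
call = json.loads(generated.replace("<|eot_id|>", ""))

# Stubbed execution; a real implementation would dispatch to the named function.
result = {"names": ["stock_1", "stock_2", "stock_3", "stock_4", "stock_5"]}

messages.append({"role": "function_call", "content": json.dumps(call)})
messages.append({"role": "function_response", "content": json.dumps(result)})

# Re-apply the chat template and generate again to obtain the final assistant answer.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```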
📚 Documentation
Dataset
See Trelis/function_calling_v3.
Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
Property | Details |
---|---|
Model developers | Meta |
Variations | Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. |
Input | Models input text only. |
Output | Models generate text and code only. |
Model Architecture | Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. |
Training Data | A new mix of publicly available online data. |
Params | 8B and 70B |
Context length | 8k |
GQA | Yes |
Token count | 15T+ |
Knowledge cutoff | March 2023 (8B); December 2023 (70B) |
Model Release Date | April 18, 2024. |
Status | This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. |
License | A custom commercial license is available at: https://llama.meta.com/llama3/license |
Where to send questions or comments about the model | Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go here. |
Intended Use
Intended Use Cases: Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
Out-of-scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.
💡 Usage Tip
Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
How to use
This repository contains two versions of Meta-Llama-3-8B, for use with transformers and with the original llama3 codebase.
Use with transformers
See the snippet below for usage with Transformers:
```python
>>> import transformers
>>> import torch

>>> model_id = "meta-llama/Meta-Llama-3-8B"

>>> pipeline = transformers.pipeline(
    "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
)
>>> pipeline("Hey how are you doing today?")
```
Use with llama3
Please follow the instructions in the repository.

To download Original checkpoints, see the example command below leveraging huggingface-cli:

```
huggingface-cli download meta-llama/Meta-Llama-3-8B --include "original/*" --local-dir Meta-Llama-3-8B
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
Hardware and Software
Training Factors: We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
Carbon Footprint: Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.
Model | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
---|---|---|---|
Llama 3 8B | 1.3M | 700 | 390 |
Llama 3 70B | 6.4M | 700 | 1900 |
Total | 7.7M | | 2290 |
CO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
Training Data
Overview: Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
Data Freshness: The pretraining data has a cutoff of March 2023 for the 8B model and December 2023 for the 70B model.
Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.
Category | Benchmark | Llama 3 8B | Llama2 7B | Llama2 13B | Llama 3 70B | Llama2 70B |
---|---|---|---|---|---|---|
General | MMLU (5-shot) | 66.6 | 45.7 | 53.8 | 79.5 | 69.7 |
📄 License
This model is licensed under the Llama 3 Community license and the Apache-2.0 license.

