# MaziyarPanahi/calme-2.3-llama3-70b
This model is a fine-tuned (DPO) version of the meta-llama/Meta-Llama-3-70B-Instruct model, designed for text generation tasks.
## Quick Start
You can use this model by specifying `MaziyarPanahi/calme-2.3-llama3-70b` as the model name in Hugging Face's `transformers` library. Here is a Python example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, pipeline
import torch

model_id = "MaziyarPanahi/calme-2.3-llama3-70b"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    trust_remote_code=True,
)
streamer = TextStreamer(tokenizer)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    model_kwargs={"torch_dtype": torch.bfloat16},
    streamer=streamer,
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Stop on the standard EOS token or on the ChatML / Llama-3 end-of-turn tokens.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|im_end|>"),
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipe(
    prompt,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)
print(outputs[0]["generated_text"][len(prompt):])
```
## Features
- Fine-Tuned Model: a fine-tuned (DPO) version of the meta-llama/Meta-Llama-3-70B-Instruct model.
- Quantized GGUF Available: all GGUF quantizations are accessible at MaziyarPanahi/calme-2.3-llama3-70b-GGUF.
- ChatML Prompt Template: uses the ChatML prompt template for text generation.
## Documentation
Detailed results can be found here.

| Metric | Value |
|---|---|
| Avg. | 78.74 |
| AI2 Reasoning Challenge (25-Shot) | 72.35 |
| HellaSwag (10-Shot) | 86.00 |
| MMLU (5-Shot) | 80.47 |
| TruthfulQA (0-shot) | 63.45 |
| Winogrande (5-shot) | 82.95 |
| GSM8k (5-shot) | 87.19 |
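The reported average is the mean of the six task scores above, which can be cross-checked with a few lines of Python:

```python
# Sanity check: the "Avg." row is the mean of the six benchmark scores.
scores = {
    "ARC (25-shot)": 72.35,
    "HellaSwag (10-shot)": 86.00,
    "MMLU (5-shot)": 80.47,
    "TruthfulQA (0-shot)": 63.45,
    "Winogrande (5-shot)": 82.95,
    "GSM8k (5-shot)": 87.19,
}
avg = sum(scores.values()) / len(scores)
assert abs(avg - 78.74) < 0.01  # matches the reported 78.74
print(f"average = {avg:.3f}")
```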
Top 10 models on the Leaderboard

### Prompt Template
This model uses the ChatML prompt template:

```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
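For illustration, the layout above can be produced by hand; `to_chatml` below is a hypothetical helper, not part of the model's API (in practice, `tokenizer.apply_chat_template` handles this for you):

```python
# Hypothetical helper that renders messages into the ChatML layout shown above.
def to_chatml(messages, add_generation_prompt=True):
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}\n<|im_end|>\n"
        for m in messages
    ]
    if add_generation_prompt:
        # Open the assistant turn so the model continues from here.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
])
print(prompt)
```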
## License

The model is released under the llama3 license. More details are available in the LICENSE file.
### Model Information

| Property | Details |
|---|---|
| Model Type | Fine-tuned (DPO) of meta-llama/Meta-Llama-3-70B-Instruct |
| Training Data | MaziyarPanahi/truthy-dpo-v0.1-axolotl |
| Model Creator | MaziyarPanahi |
| Quantized By | MaziyarPanahi |
| License Name | llama3 |
| License Link | LICENSE |
| Pipeline Tag | text-generation |
| Base Model | meta-llama/Meta-Llama-3-70B-Instruct |
| Model Name | calme-2.3-llama3-70b |