# 🚀 Zephyr 7B Gemma

Zephyr 7B Gemma is a fine-tuned language model, trained to serve as a helpful assistant, offering high-quality text generation capabilities.
## 🚀 Quick Start

The model can be used for chat. Here's how to run it with the `pipeline()` function from 🤗 Transformers:
```python
# pip install transformers>=4.38.2
# pip install accelerate
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-gemma-v0.1",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
messages = [
    {
        "role": "system",
        "content": "",  # The model has not been trained to follow a system prompt
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
outputs = pipe(
    messages,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
    stop_sequence="<|im_end|>",
)
print(outputs[0]["generated_text"][-1]["content"])
# It is not possible for a human to eat a helicopter in one sitting, as a
# helicopter is a large and inedible machine. Helicopters are made of metal,
# plastic, and other materials that are not meant to be consumed by humans.
# Eating a helicopter would be extremely dangerous and would likely cause
# serious health problems, including choking, suffocation, and poisoning. It is
# important to only eat food that is safe and intended for human consumption.
```
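The `stop_sequence="<|im_end|>"` above hints at a ChatML-style chat template. As an illustration of what such a template roughly does with the `messages` list, here is a minimal sketch; the authoritative template ships with the model's tokenizer (`tokenizer.apply_chat_template`), so treat this as an approximation, not the model's exact format:

```python
# Rough sketch of a ChatML-style prompt layout. The exact template used by
# zephyr-7b-gemma-v0.1 is defined by its tokenizer; this is illustrative only.

def format_chatml(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} dicts as one prompt string."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")  # cue the model to reply
    return "\n".join(parts)

prompt = format_chatml([{"role": "user", "content": "Hello!"}])
print(prompt)
```

Generation then stops when the model emits the `<|im_end|>` turn delimiter, which is why it is passed as the stop sequence above.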
## ✨ Features

Zephyr is a series of language models trained to act as helpful assistants. Zephyr 7B Gemma is fine-tuned on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO), which improves its text generation quality.
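DPO optimizes the policy directly on preference pairs, without training a separate reward model. As a minimal sketch (not the training code used for this model), the per-pair DPO loss can be written in plain Python; the log-probabilities and `beta` below are illustrative numbers:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * (chosen_ratio - rejected_ratio)).

    Each argument is the summed log-probability of a completion under the
    trainable policy or the frozen reference model.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))  # -log sigmoid(logits)

# When the policy favors the chosen completion more than the reference does,
# the loss drops below log(2), its value at zero margin.
loss = dpo_loss(-10.0, -14.0, -11.0, -12.0)
assert loss < math.log(2)
```

The quantities `beta * chosen_ratio` and `beta * rejected_ratio` are the implicit rewards that appear as "Rewards/chosen" and "Rewards/rejected" in DPO training logs.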
## 📚 Documentation

### Model description
| Property | Details |
|---|---|
| Model Type | A 7B-parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. |
| Language(s) (NLP) | Primarily English |
| License | Gemma Terms of Use |
| Finetuned from model | [google/gemma-7b](https://huggingface.co/google/gemma-7b) |
### Model Sources

- Repository: https://github.com/huggingface/alignment-handbook
- Demo: https://huggingface.co/spaces/HuggingFaceH4/zephyr-7b-gemma-chat
### Performance

| Model | MT Bench ⬇️ | IFEval |
|---|---|---|
| [zephyr-7b-gemma-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1) | 7.81 | 28.76 |
| [zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 7.34 | 43.81 |
| [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) | 6.38 | 38.01 |
| Model | AGIEval | GPT4All | TruthfulQA | BigBench | Average ⬇️ |
|---|---|---|---|---|---|
| [zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 37.52 | 71.77 | 55.26 | 39.77 | 51.08 |
| [zephyr-7b-gemma-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1) | 34.22 | 66.37 | 52.19 | 37.10 | 47.47 |
| [mlabonne/Gemmalpaca-7B](https://huggingface.co/mlabonne/Gemmalpaca-7B) | 21.6 | 40.87 | 44.85 | 30.49 | 34.45 |
| [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) | 21.33 | 40.84 | 41.70 | 30.25 | 33.53 |
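The Average column is the plain (unweighted) mean of the four benchmark scores. For example, for zephyr-7b-gemma-v0.1:

```python
# Recomputing the Average column entry for zephyr-7b-gemma-v0.1
# from the four per-benchmark scores in the table above.
scores = {"AGIEval": 34.22, "GPT4All": 66.37, "TruthfulQA": 52.19, "BigBench": 37.10}
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 47.47, matching the table
```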
### Details of AGIEval, GPT4All, TruthfulQA, BigBench
#### AGIEval

| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| agieval_aqua_rat | 0 | acc | 21.65 | ± | 2.59 |
| | | acc_norm | 25.20 | ± | 2.73 |
| agieval_logiqa_en | 0 | acc | 34.72 | ± | 1.87 |
| | | acc_norm | 35.94 | ± | 1.88 |
| agieval_lsat_ar | 0 | acc | 19.57 | ± | 2.62 |
| | | acc_norm | 21.74 | ± | 2.73 |
| agieval_lsat_lr | 0 | acc | 30.59 | ± | 2.04 |
| | | acc_norm | 32.55 | ± | 2.08 |
| agieval_lsat_rc | 0 | acc | 49.07 | ± | 3.05 |
| | | acc_norm | 42.75 | ± | 3.02 |
| agieval_sat_en | 0 | acc | 54.85 | ± | 3.48 |
| | | acc_norm | 53.40 | ± | 3.48 |
| agieval_sat_en_without_passage | 0 | acc | 37.38 | ± | 3.38 |
| | | acc_norm | 33.98 | ± | 3.31 |
| agieval_sat_math | 0 | acc | 30.91 | ± | 3.12 |
| | | acc_norm | 28.18 | ± | 3.04 |

Average: 34.22%
#### GPT4All

| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| arc_challenge | 0 | acc | 49.15 | ± | 1.46 |
| | | acc_norm | 52.47 | ± | 1.46 |
| arc_easy | 0 | acc | 77.44 | ± | 0.86 |
| | | acc_norm | 74.75 | ± | 0.89 |
| boolq | 1 | acc | 79.69 | ± | 0.70 |
| hellaswag | 0 | acc | 60.59 | ± | 0.49 |
| | | acc_norm | 78.00 | ± | 0.41 |
| openbookqa | 0 | acc | 29.20 | ± | 2.04 |
| | | acc_norm | 37.80 | ± | 2.17 |
| piqa | 0 | acc | 76.82 | ± | 0.98 |
| | | acc_norm | 77.80 | ± | 0.97 |
| winogrande | 0 | acc | 64.09 | ± | 1.35 |

Average: 66.37%
#### TruthfulQA

| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| truthfulqa_mc | 1 | mc1 | 35.74 | ± | 1.68 |
| | | mc2 | 52.19 | ± | 1.59 |

Average: 52.19%
#### BigBench

| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 53.68 | ± | 3.63 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 59.89 | ± | 2.55 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 30.23 | ± | 2.86 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 11.42 | ± | 1.68 |
| | | exact_str_match | 0.00 | ± | 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 28.40 | ± | 2.02 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 19.14 | ± | 1.49 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 44.67 | ± | 2.88 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 26.80 | ± | 1.98 |
| bigbench_navigate | 0 | multiple_choice_grade | 50.00 | ± | 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 52.75 | ± | 1.12 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 33.04 | ± | 2.22 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 33.37 | ± | 1.49 |
| bigbench_snarks | 0 | multiple_choice_grade | 48.62 | ± | 3.73 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 58.11 | ± | 1.57 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 37.20 | ± | 1.53 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 20.08 | ± | 1.13 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 15.77 | ± | 0.87 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 44.67 | ± | 2.88 |

Average: 37.1%
### Intended uses & limitations
The model was initially fine-tuned on the [DEITA 10K](https://huggingface.co/datasets/HuggingFaceH4/deita-10k-v0-sft) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. It was then aligned with 🤗 TRL's `DPOTrainer` on the [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) dataset, which contains 7k prompts and model completions ranked by GPT-4. As a result, the model can be used for chat; check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat) to test its capabilities.
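Preference datasets of this kind consist of a prompt plus a preferred and a dispreferred completion. As a hypothetical illustration of the common prompt/chosen/rejected record shape that DPO training pipelines typically consume (not necessarily the exact column names or nesting used in argilla/dpo-mix-7k):

```python
# Hypothetical single preference record in the widely used
# prompt/chosen/rejected layout; the real dataset's schema may differ.
record = {
    "prompt": "How many helicopters can a human eat in one sitting?",
    "chosen": "None. Helicopters are large, inedible machines.",
    "rejected": "About three, if they are small helicopters.",
}

def is_valid_pair(rec):
    """Check a record has all three fields and distinct completions."""
    required = {"prompt", "chosen", "rejected"}
    return required <= rec.keys() and rec["chosen"] != rec["rejected"]

assert is_valid_pair(record)
```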
### Bias, Risks, and Limitations

Zephyr 7B Gemma has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). The size and composition of the corpus used to train the base model ([google/gemma-7b](https://huggingface.co/google/gemma-7b)) are also unknown, but it is likely to have included a mix of web data and technical sources like books and code. See the [StarCoder2 model card](https://huggingface.co/bigcode/starcoder2-15b) for an example of this.
### Training and evaluation data

This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-gemma-sft-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-sft-v0.1) on the argilla/dpo-mix-7k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4695
- Rewards/chosen: -3.3746
- Rewards/rejected: -4.9715
## 📄 License
The model is under the Gemma Terms of Use.

