🚀 Zephyr 7B Beta - AWQ
This repository contains AWQ model files for Zephyr 7B Beta, offering efficient and accurate low-bit weight quantization for various inference scenarios.

TheBloke's LLM work is generously supported by a grant from Andreessen Horowitz (a16z)
✨ Features
- Quantized Model: Provides AWQ quantized model files for efficient inference.
- Multiple Inference Support: Compatible with inference frameworks such as text-generation-webui, vLLM, TGI, and AutoAWQ.
- High Performance: Achieves high scores on benchmarks such as MT-Bench and AlpacaEval.
📦 Installation
How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui):
- Ensure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the one-click installers unless you're confident with manual installation.
- Click the Model tab.
- Under Download custom model or LoRA, enter TheBloke/zephyr-7B-beta-AWQ.
- Click Download.
- Wait for the model to finish downloading. It will display "Done" upon completion.
- In the top left, click the refresh icon next to Model.
- In the Model dropdown, select the model you just downloaded: zephyr-7B-beta-AWQ.
- Select Loader: AutoAWQ.
- Click Load; the model will load and be ready for use.
- If you want custom settings, configure them, then click Save settings for this model followed by Reload the Model in the top right.
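If you prefer to script the download instead of using the web UI, the snippet below is a minimal sketch using the huggingface_hub library; the destination directory is just an example.

```python
# Minimal sketch: download the AWQ repo with huggingface_hub.
# Assumes `pip install huggingface_hub`; local_dir is an example path.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="TheBloke/zephyr-7B-beta-AWQ",
    local_dir="./zephyr-7B-beta-AWQ",
)
print(f"Model files downloaded to: {local_path}")
```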
💻 Usage Examples
Using vLLM
```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template = '''<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/zephyr-7B-beta-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
Using Hugging Face Text Generation Inference (TGI)
```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template = '''<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template.format(prompt=prompt),
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)
print(f"Model output: {response}")
```
Using AutoAWQ
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name_or_path = "TheBloke/zephyr-7B-beta-AWQ"

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)

# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
                                          trust_remote_code=False, safetensors=True)

prompt = "Tell me about AI"
prompt_template = '''<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
'''

print("*** Running model.generate:")

token_input = tokenizer(
    prompt_template.format(prompt=prompt),
    return_tensors='pt'
).input_ids.cuda()

# Generate output
generation_output = model.generate(
    token_input,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    max_new_tokens=512
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("LLM output: ", text_output)
```
📚 Documentation
Repositories available
- [AWQ model(s) for GPU inference](https://huggingface.co/TheBloke/zephyr-7B-beta-AWQ)
- [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/zephyr-7B-beta-GPTQ)
- [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF)
- [Hugging Face H4's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
Prompt template: Zephyr
```
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
```
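Rather than assembling this template by hand, you can usually build it from the tokenizer's chat template; the snippet below is a sketch that assumes the AWQ repo ships the same chat template as the original Zephyr tokenizer.

```python
# Sketch: build a Zephyr-format prompt from the tokenizer's chat template.
# Assumes the repo's tokenizer config includes Zephyr's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TheBloke/zephyr-7B-beta-AWQ")
messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": "Tell me about AI"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False,
                                       add_generation_prompt=True)
print(prompt)  # should match the template shown above
```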
Provided files, and AWQ parameters
For the first release of AWQ models, only 128g (group size 128) models are released. 32g models may be added in the future if there is interest, and after thorough testing with AutoAWQ and vLLM. Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| --- | --- | --- | --- | --- | --- |
| [main](https://huggingface.co/TheBloke/zephyr-7B-beta-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.15 GB |
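As a rough sanity check on the 4.15 GB figure, 4-bit weights for roughly 7B parameters plus per-group quantization metadata land in the same ballpark. The arithmetic below is a back-of-the-envelope approximation, not an official size breakdown; the parameter count and overhead estimate are assumptions.

```python
# Back-of-the-envelope size estimate for a 4-bit, group-size-128 AWQ model.
params = 7.24e9          # approximate Mistral-7B parameter count (assumption)
bits_per_weight = 4
group_size = 128

weight_bytes = params * bits_per_weight / 8
# Assume ~3 extra bytes per 128-weight group for the fp16 scale and packed
# zero-point (a rough approximation).
overhead_bytes = (params / group_size) * 3

print(f"~{(weight_bytes + overhead_bytes) / 1e9:.2f} GB")
# ~3.8 GB for quantized weights; unquantized tensors such as embeddings
# account for most of the gap to the 4.15 GB on-disk size.
```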
🔧 Technical Details
Compatibility
The provided files are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using Loader: AutoAWQ.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
Model Performance
At the time of release, Zephyr-7B-Beta is the highest-ranked 7B chat model on the [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmarks:

| Model | Size | Alignment | MT-Bench (score) | AlpacaEval (win rate %) |
| --- | --- | --- | --- | --- |
| StableLM-Tuned-Alpha | 7B | dSFT | 2.75 | - |
| MPT-Chat | 7B | dSFT | 5.42 | - |
| Xwin-LM v0.1 | 7B | dPPO | 6.19 | 87.83 |
| Mistral-Instruct v0.1 | 7B | - | 6.84 | - |
| Zephyr-7b-Alpha | 7B | dDPO | 6.88 | - |
| Zephyr-7b-Beta | 7B | dDPO | 7.34 | 90.60 |
| Falcon-Instruct | 40B | dSFT | 5.17 | 45.71 |
| Guanaco | 65B | SFT | 6.41 | 71.80 |
| Llama2-Chat | 70B | RLHF | 6.86 | 92.66 |
| Vicuna v1.3 | 33B | dSFT | 7.12 | 88.99 |
| WizardLM v1.0 | 70B | dSFT | 7.71 | - |
| Xwin-LM v0.1 | 70B | dPPO | - | 95.57 |
| GPT-3.5-turbo | - | RLHF | 7.94 | 89.37 |
| Claude 2 | - | RLHF | 8.06 | 91.36 |
| GPT-4 | - | RLHF | 8.99 | 95.28 |
📄 License
The model is licensed under the MIT license.
Discord
For further support and discussions on these models and AI in general, join us at: TheBloke AI's Discord server
Thanks, and how to contribute
Thanks to the chirper.ai team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
If you're able and willing to contribute, it will be greatly appreciated and will help in providing more models and starting new AI projects. Donaters will get priority support on AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
- Patreon: https://patreon.com/TheBlokeAI
- Ko-Fi: https://ko-fi.com/TheBlokeAI
Special thanks to: Aemon Algiz.
Patreon special mentions: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann - Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjareholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all generous patrons and donaters! And thank you again to a16z for their generous grant.
Original model card: Hugging Face H4's Zephyr 7B Beta
Model Card for Zephyr 7B Beta
Zephyr is a series of language models trained to be helpful assistants. Zephyr-7B-Beta is the second model in the series and is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) trained on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO). Removing the in-built alignment of these datasets improved performance on [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and made the model more helpful. However, the model may generate problematic text when prompted and should only be used for educational and research purposes. More details can be found in the technical report.
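For readers unfamiliar with DPO, its core is a simple logistic loss over chosen/rejected completion pairs, scored against a frozen reference model. The function below is an illustrative sketch of that objective, not the actual training code used for Zephyr.

```python
# Illustrative sketch of the DPO objective (not Zephyr's training code).
# Inputs are summed log-probabilities of each completion: log_p_* under the
# policy being trained, ref_log_p_* under the frozen reference model.
import torch.nn.functional as F

def dpo_loss(log_p_chosen, log_p_rejected,
             ref_log_p_chosen, ref_log_p_rejected, beta=0.1):
    # Implicit rewards are scaled log-ratios between policy and reference.
    chosen_reward = beta * (log_p_chosen - ref_log_p_chosen)
    rejected_reward = beta * (log_p_rejected - ref_log_p_rejected)
    # The logistic loss pushes the chosen reward above the rejected one.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```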
Model description
- Model type: A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- Language(s) (NLP): Primarily English
- License: MIT
- Finetuned from model: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
Model Sources
- Repository: https://github.com/huggingface/alignment-handbook
- Demo: https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat
- Chatbot Arena: Evaluate Zephyr 7B against 10+ LLMs in the LMSYS arena: http://arena.lmsys.org
Intended uses & limitations
The model was initially fine-tuned on a filtered and preprocessed version of the UltraChat dataset, which contains diverse synthetic dialogues generated by ChatGPT. It was then further aligned with TRL's DPOTrainer on the openbmb/UltraFeedback dataset. It can be used for chat, and you can test its capabilities using the [demo](https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat); a basic chat sketch is shown below.
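The snippet below sketches basic chat usage via the Transformers pipeline, following the pattern documented for the original fp16 checkpoint; the system and user messages are examples, and you can substitute the AWQ repo and loader of your choice.

```python
# Sketch: chat with Zephyr via the Transformers pipeline. Assumes a GPU with
# enough memory for the fp16 model; messages below are example content.
import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta",
                torch_dtype=torch.bfloat16, device_map="auto")
messages = [
    {"role": "system", "content": "You are a friendly chatbot."},
    {"role": "user", "content": "Tell me about AI"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False,
                                            add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True,
               temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```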
Bias, Risks, and Limitations
Zephyr-7B-Beta has not been aligned to human preferences using techniques like RLHF, nor deployed with in-the-loop filtering of responses like ChatGPT, so it may produce problematic outputs, especially when prompted to do so. The size and composition of the corpus used to train the base model (mistralai/Mistral-7B-v0.1) are unknown, but it likely included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example.
Training and evaluation data
During DPO training, this model achieves the following results on the evaluation set:
- Loss: 0.7496
- Rewards/chosen: -4.5221
- Rewards/rejected: -8.3184
- Rewards/accuracies: 0.7812

