🚀 StableVicuna-13B-GPTQ
This repo contains 4-bit GPTQ format quantised models of CarperAI's StableVicuna 13B. Quantising to 4-bit greatly reduces the model's memory footprint, making efficient deployment practical and enabling faster inference on consumer GPUs.
🚀 Quick Start
These files are the result of first merging the deltas from CarperAI's repository with the original LLaMA 13B weights, then quantising to 4-bit using GPTQ-for-LLaMa.
✨ Features
- Multiple Repository Options: this GPTQ repo is one of several formats in which the model is published; the unquantised fp16 weights used to create these files are available separately as stable-vicuna-13B-HF.
- Specific Prompt Template: This model works best with the following prompt template:

```
### Human: your prompt here
### Assistant:
```
📦 Installation
How to easily download and use this model in text-generation-webui
1. Open the text-generation-webui UI as normal.
2. Click the Model tab.
3. Under Download custom model or LoRA, enter `TheBloke/stable-vicuna-13B-GPTQ`.
4. Click Download.
5. Wait until it says it's finished downloading.
6. Click the Refresh icon next to Model in the top left.
7. In the Model drop-down, choose the model you just downloaded: `stable-vicuna-13B-GPTQ`.
8. Once it says it's loaded, click the Text Generation tab and enter a prompt!
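If you prefer to script the download rather than use the UI, the snippet below is a minimal sketch using the huggingface_hub library. This is not part of the original instructions; it assumes `pip install huggingface_hub`:

```python
# Minimal sketch: download the default (main) branch of the repo
# programmatically, then point text-generation-webui at the result.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="TheBloke/stable-vicuna-13B-GPTQ")
print(local_dir)  # copy or symlink this folder into text-generation-webui/models
```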
💻 Usage Examples
Basic Usage
The basic way to use this model is with the provided prompt template:

```
### Human: your prompt here
### Assistant:
```
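Beyond the web UI, here is a hedged sketch of programmatic inference with the AutoGPTQ library. The documented path is text-generation-webui, so treat the package choice and the `model_basename` handling below as assumptions rather than the supported route:

```python
# Hedged sketch: load the 4-bit GPTQ file with AutoGPTQ and generate text.
# Assumes `pip install auto-gptq transformers` and a CUDA GPU.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "TheBloke/stable-vicuna-13B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoGPTQForCausalLM.from_quantized(
    repo,
    model_basename="stable-vicuna-13B-GPTQ-4bit.compat.no-act-order",
    use_safetensors=True,
    device="cuda:0",
)

# Wrap the user message in the prompt template this model expects.
prompt = "### Human: Explain GPTQ quantisation in one paragraph.\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```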
📚 Documentation
Provided files
- Compatible file: `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors`
  - In the `main` branch (the default one) you will find `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors`.
  - This will work with all versions of GPTQ-for-LLaMa. It has maximum compatibility.
  - It was created without the `--act-order` parameter. It may have slightly lower inference quality compared to the other file, but is guaranteed to work on all versions of GPTQ-for-LLaMa and text-generation-webui.
  - Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches.
  - Works with text-generation-webui one-click-installers.
  - Parameters: Groupsize = 128g. No act-order.
  - Command used to create the GPTQ:

```
CUDA_VISIBLE_DEVICES=0 python3 llama.py stable-vicuna-13B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors
```
- Latest file: `stable-vicuna-13B-GPTQ-4bit.latest.act-order.safetensors`
  - Created for more recent versions of GPTQ-for-LLaMa, and uses the `--act-order` flag for maximum theoretical performance.
  - To access this file, switch to the `latest` branch of this repo and download from there.
  - Only works with recent GPTQ-for-LLaMa code.
  - Does not work with text-generation-webui one-click-installers.
  - Parameters: Groupsize = 128g. act-order.
  - Offers the highest quality quantisation, but requires recent GPTQ-for-LLaMa code.
  - Command used to create the GPTQ:

```
CUDA_VISIBLE_DEVICES=0 python3 llama.py stable-vicuna-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors stable-vicuna-13B-GPTQ-4bit.act-order.safetensors
```
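To fetch the act-order file without switching branches by hand, here is a small huggingface_hub sketch. The branch name and filename come from the list above; the programmatic route itself is an assumption, not part of the original instructions:

```python
# Hedged sketch: download just the act-order safetensors file from
# the `latest` branch of this repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/stable-vicuna-13B-GPTQ",
    filename="stable-vicuna-13B-GPTQ-4bit.latest.act-order.safetensors",
    revision="latest",  # the branch that hosts the act-order file
)
print(path)
```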
Manual instructions for text-generation-webui
- File `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors` can be loaded the same as any other GPTQ file, without requiring any updates to oobabooga's text-generation-webui.
- Instructions on using GPTQ 4bit files in text-generation-webui can be found in its documentation.
- The other `safetensors` model file was created using `--act-order` to give the maximum possible quantisation quality, but this means it requires the latest GPTQ-for-LLaMa code inside the UI.
- If you want to use the act-order `safetensors` file and need to update the Triton branch of GPTQ-for-LLaMa, here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:
```
# Clone text-generation-webui, if you don't already have it
git clone https://github.com/oobabooga/text-generation-webui
# Make a repositories directory
mkdir text-generation-webui/repositories
cd text-generation-webui/repositories
# Clone the latest GPTQ-for-LLaMa code inside text-generation-webui
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
```
- Then install this model into `text-generation-webui/models` and launch the UI as follows:

```
cd text-generation-webui
python server.py --model stable-vicuna-13B-GPTQ --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
```
- The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
- If you can't update GPTQ-for-LLaMa or don't want to, you can use `stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors` as mentioned above, which should work without any upgrades to text-generation-webui.
🔧 Technical Details
Original StableVicuna-13B model card
Model Description
StableVicuna-13B is a Vicuna-13B v0 model fine-tuned using reinforcement learning from human feedback (RLHF) via Proximal Policy Optimization (PPO) on various conversational and instructional datasets.
Model Details
| Property | Details |
|---|---|
| Model Type | StableVicuna-13B is an auto-regressive language model based on the LLaMA transformer architecture. |
| Trained by | Duy Phung of CarperAI |
| Language(s) | English |
| Library | trlX |
| License for delta weights | CC-BY-NC-SA-4.0. Note: the license for the base LLaMA model's weights is Meta's non-commercial bespoke license. |
| Contact | For questions and comments about the model, visit the CarperAI and StableFoundation Discord servers. |
| \(n_\text{parameters}\) | 13B |
| \(d_\text{model}\) | 5120 |
| \(n_\text{layers}\) | 40 |
| \(n_\text{heads}\) | 40 |
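As a quick sanity check, the dimensions above are consistent with the quoted 13B parameter count under the standard transformer approximation. The vocabulary size used below is an assumption (32,000, typical for LLaMA tokenizers):

```python
# Rough parameter-count check: params ≈ 12 * n_layers * d_model^2
# (attention + MLP blocks), plus input embeddings. This is an
# approximation, not an exact count of the LLaMA architecture.
n_layers, d_model, vocab_size = 40, 5120, 32_000
approx = 12 * n_layers * d_model**2 + vocab_size * d_model
print(f"~{approx / 1e9:.1f}B parameters")  # ~12.7B, close to the quoted 13B
```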
Training
Training Dataset
StableVicuna-13B is fine-tuned on a mix of three datasets: OpenAssistant Conversations Dataset (OASST1), a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages distributed across 66,497 conversation trees, in 35 different languages; GPT4All Prompt Generations, a dataset of 400k prompts and responses generated by GPT-4; and Alpaca, a dataset of 52,000 instructions and demonstrations generated by OpenAI's text-davinci-003 engine.
The reward model used during RLHF was also trained on the OpenAssistant Conversations Dataset (OASST1), along with two other datasets: Anthropic HH-RLHF, a dataset of preferences about AI assistant helpfulness and harmlessness; and the Stanford Human Preferences Dataset, a dataset of 385K collective human preferences over responses to questions/instructions in 18 different subject areas, from cooking to legal advice.
Training Procedure
`CarperAI/stable-vicuna-13b-delta` was trained using PPO as implemented in trlX with the following configuration (also reproduced as a Python sketch after the table):
| Hyperparameter | Value |
|---|---|
| num_rollouts | 128 |
| chunk_size | 16 |
| ppo_epochs | 4 |
| init_kl_coef | 0.1 |
| target | 6 |
| horizon | 10000 |
| gamma | 1 |
| lam | 0.95 |
| cliprange | 0.2 |
| cliprange_value | 0.2 |
| vf_coef | 1.0 |
| scale_reward | None |
| cliprange_reward | 10 |
| generation_kwargs | |
| max_length | 512 |
| min_length | 48 |
| top_k | 0.0 |
| top_p | 1.0 |
| do_sample | True |
| temperature | 1.0 |
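For reference, the same hyperparameters laid out as a plain Python dict in the shape of a trlX PPO method config. The values come from the table above, while the surrounding schema (key names and nesting) is an assumption about trlX's config format:

```python
# PPO hyperparameters from the table above, arranged as a trlX-style
# method config. Treat the schema as an assumption; the values
# themselves are taken directly from the model card.
ppo_method = {
    "name": "PPOConfig",
    "num_rollouts": 128,
    "chunk_size": 16,
    "ppo_epochs": 4,
    "init_kl_coef": 0.1,
    "target": 6,
    "horizon": 10000,
    "gamma": 1,
    "lam": 0.95,
    "cliprange": 0.2,
    "cliprange_value": 0.2,
    "vf_coef": 1.0,
    "scale_reward": None,
    "cliprange_reward": 10,
    "gen_kwargs": {
        "max_length": 512,
        "min_length": 48,
        "top_k": 0.0,
        "top_p": 1.0,
        "do_sample": True,
        "temperature": 1.0,
    },
}
```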
Use and Limitations
Intended Use
This model is intended to be used for text generation with a focus on conversational tasks. Users may further fine-tune the model on their own data to improve the model's performance on their specific tasks in accordance with the non-commercial license.
📄 License
The license for delta weights is CC-BY-NC-SA-4.0. Note that the license for the base LLaMA model's weights is Meta's non-commercial bespoke license.
Discord
For further support, and discussions on these models and AI in general, join us at: TheBloke AI's Discord server
Thanks, and how to contribute.
Thanks to the chirper.ai team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
- Patreon: https://patreon.com/TheBlokeAI
- Ko-Fi: https://ko-fi.com/TheBlokeAI
Special thanks to: Aemon Algiz.
Patreon special mentions: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.

