# 🚀 DiscoLM 70B - GGUF

This repository contains GGUF format model files for Disco Research's DiscoLM 70B, offering various quantization options for different use cases.
## 📚 Documentation

### Model Information
| Property | Details |
|---|---|
| Base Model | DiscoResearch/DiscoLM-70b |
| Datasets | Open-Orca/SlimOrca-Dedup, teknium/openhermes, meta-math/MetaMathQA, migtissera/Synthia-v1.3, THUDM/AgentInstruct, LeoLM/German_Songs, LeoLM/German_Poems, LeoLM/OpenSchnabeltier, bjoernp/ultrachat_de |
| Inference | false |
| Languages | en, de |
| Library Name | transformers |
| License | llama2 |
| Model Creator | Disco Research |
| Model Name | DiscoLM 70B |
| Model Type | llama |
| Pipeline Tag | text-generation |
| Prompt Template | ChatML (see the Prompt template section below) |
| Quantized By | TheBloke |
| Tags | goliath, deutsch, llama2, discoresearch |
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
- llama.cpp. The source project for GGUF. Offers a CLI and a server option.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
- KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
- GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
- LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of 27/11/2023.
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
- Faraday.dev, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
- [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
- candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.
- ctransformers, a Python library with GPU accel, LangChain support, and an OpenAI-compatible AI server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Repositories available
- [AWQ model(s) for GPU inference](https://huggingface.co/TheBloke/DiscoLM-70B-AWQ)
- [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/DiscoLM-70B-GPTQ)
- [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/DiscoLM-70B-GGUF)
- [Disco Research's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/DiscoResearch/DiscoLM-70b)
### Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
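If you build this prompt programmatically, here is a minimal sketch of the substitution (the `format_chatml` helper name is illustrative, not part of any library):

```python
def format_chatml(system_message: str, prompt: str) -> str:
    # Mirrors the template above; generation should be stopped on "<|im_end|>".
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant"
    )

print(format_chatml("You are a helpful assistant.", "Hello!"))
```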
### Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th 2023 onwards, as of commit d0cee0d.
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
### Explanation of quantisation methods
Click to see details
The new methods available are:
- GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
- GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
- GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
- GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
- GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
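As a sanity check on the bpw figures, the Q4_K case works out exactly, assuming each super-block also stores one fp16 scale and one fp16 min (my reading of the GGML k-quant layout, not stated in the list above):

```python
# Q4_K: one super-block = 8 blocks x 32 weights = 256 weights
weight_bits = 256 * 4      # 4-bit quantised weights            -> 1024 bits
block_bits = 8 * (6 + 6)   # 6-bit scale + 6-bit min per block  ->   96 bits
super_bits = 2 * 16        # fp16 scale + min per super-block   ->   32 bits (assumed)

bpw = (weight_bits + block_bits + super_bits) / 256
print(bpw)  # 4.5, matching the Q4_K figure above
```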
Refer to the Provided Files table below to see what files use which methods, and how.
### Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
|---|---|---|---|---|---|
| [discolm-70b.Q2_K.gguf](https://huggingface.co/TheBloke/DiscoLM-70B-GGUF/blob/main/discolm-70b.Q2_K.gguf) | Q2_K | 2 | 29.28 GB | 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [discolm-70b.Q3_K_S.gguf](https://huggingface.co/TheBloke/DiscoLM-70B-GGUF/blob/main/discolm-70b.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB | 32.42 GB | very small, high quality loss |
| [discolm-70b.Q3_K_M.gguf](https://huggingface.co/TheBloke/DiscoLM-70B-GGUF/blob/main/discolm-70b.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB | 35.69 GB | very small, high quality loss |
| [discolm-70b.Q3_K_L.gguf](https://huggingface.co/TheBloke/DiscoLM-70B-GGUF/blob/main/discolm-70b.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB | 38.65 GB | small, substantial quality loss |
| [discolm-70b.Q4_0.gguf](https://huggingface.co/TheBloke/DiscoLM-70B-GGUF/blob/main/discolm-70b.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB | 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [discolm-70b.Q4_K_S.gguf](https://huggingface.co/TheBloke/DiscoLM-70B-GGUF/blob/main/discolm-70b.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB | 41.57 GB | small, greater quality loss |
| [discolm-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/DiscoLM-70B-GGUF/blob/main/discolm-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB | 43.92 GB | medium, balanced quality - recommended |
| [discolm-70b.Q5_0.gguf](https://huggingface.co/TheBloke/DiscoLM-70B-GGUF/blob/main/discolm-70b.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB | 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [discolm-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/DiscoLM-70B-GGUF/blob/main/discolm-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB | 49.96 GB | large, low quality loss - recommended |
| [discolm-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/DiscoLM-70B-GGUF/blob/main/discolm-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB | 51.25 GB | large, very low quality loss - recommended |
| discolm-70b.Q6_K.gguf | Q6_K | 6 | 56.59 GB | 59.09 GB | very large, extremely low quality loss |
| discolm-70b.Q8_0.gguf | Q8_0 | 8 | 73.29 GB | 75.79 GB | very large, extremely low quality loss - not recommended |
Note: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
#### Q6_K and Q8_0 files are split and require joining
Note: HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
Click for instructions regarding Q6_K and Q8_0 files
##### q6_K
Please download:
- `discolm-70b.Q6_K.gguf-split-a`
- `discolm-70b.Q6_K.gguf-split-b`

##### q8_0
Please download:
- `discolm-70b.Q8_0.gguf-split-a`
- `discolm-70b.Q8_0.gguf-split-b`
To join the files, do the following:

Linux and macOS:
```shell
cat discolm-70b.Q6_K.gguf-split-* > discolm-70b.Q6_K.gguf && rm discolm-70b.Q6_K.gguf-split-*
cat discolm-70b.Q8_0.gguf-split-* > discolm-70b.Q8_0.gguf && rm discolm-70b.Q8_0.gguf-split-*
```

Windows command line:
```shell
COPY /B discolm-70b.Q6_K.gguf-split-a + discolm-70b.Q6_K.gguf-split-b discolm-70b.Q6_K.gguf
del discolm-70b.Q6_K.gguf-split-a discolm-70b.Q6_K.gguf-split-b

COPY /B discolm-70b.Q8_0.gguf-split-a + discolm-70b.Q8_0.gguf-split-b discolm-70b.Q8_0.gguf
del discolm-70b.Q8_0.gguf-split-a discolm-70b.Q8_0.gguf-split-b
```
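If you prefer not to shell out, the same join can be done portably from Python (a minimal sketch; file names as above):

```python
import glob
import shutil

# Concatenate the split parts in name order (-split-a, -split-b) into one GGUF file.
with open("discolm-70b.Q6_K.gguf", "wb") as out:
    for part in sorted(glob.glob("discolm-70b.Q6_K.gguf-split-*")):
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, out)
```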
### How to download GGUF files
Note for manual downloaders: You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
#### In text-generation-webui
Under Download Model, you can enter the model repo: `TheBloke/DiscoLM-70B-GGUF` and below it, a specific filename to download, such as: `discolm-70b.Q4_K_M.gguf`.
Then click Download.
#### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/DiscoLM-70B-GGUF discolm-70b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
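The same single-file download can also be done from Python via `huggingface_hub`'s `hf_hub_download` (a sketch; repo and filename as above):

```python
from huggingface_hub import hf_hub_download

# Fetches one file and returns its local path; local_dir="." places it
# in the current directory rather than the shared HF cache.
model_path = hf_hub_download(
    repo_id="TheBloke/DiscoLM-70B-GGUF",
    filename="discolm-70b.Q4_K_M.gguf",
    local_dir=".",
)
print(model_path)
```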
More advanced `huggingface-cli` download usage (click to read)
You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/DiscoLM-70B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```
And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/DiscoLM-70B-GGUF discolm-70b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
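From Python, a rough equivalent of the pattern download is `snapshot_download` with `allow_patterns`; `hf_transfer` is switched on the same way, through the environment variable, which must be set before `huggingface_hub` is imported (a sketch):

```python
import os

os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"  # must be set before the import below

from huggingface_hub import snapshot_download

# Downloads only files matching the pattern, as --include does on the CLI.
snapshot_download(
    repo_id="TheBloke/DiscoLM-70B-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir=".",
)
```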
### Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit d0cee0d or later.
```shell
./main -ngl 35 -m discolm-70b.Q4_K_M.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
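For inference from Python, the `llama-cpp-python` library listed above wraps the same engine; here is a minimal sketch mirroring the CLI parameters (the system and user messages are placeholders):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="discolm-70b.Q4_K_M.gguf",
    n_ctx=8192,       # sequence length, as with -c
    n_gpu_layers=35,  # layers to offload to GPU, as with -ngl; use 0 for CPU-only
)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWas ist die Hauptstadt von Deutschland?<|im_end|>\n"
    "<|im_start|>assistant"
)
output = llm(
    prompt,
    max_tokens=256,
    temperature=0.7,
    repeat_penalty=1.1,
    stop=["<|im_end|>"],  # end generation at the ChatML turn delimiter
)
print(output["choices"][0]["text"])
```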

