🚀 Llamacpp imatrix Quantizations of Q3-30b-A3b-Pentiment by allura-org
This project provides quantized versions of the Q3-30b-A3b-Pentiment model by allura-org, produced with llama.cpp. It offers a range of quantization types with different file sizes and quality trade-offs, suitable for different hardware and usage scenarios.
🚀 Quick Start
- Using LM Studio: You can run the quantized models in LM Studio.
- Using llama.cpp: Run them directly with llama.cpp, or any other llama.cpp-based project (see the example below).
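For example, a minimal sketch of running one of the downloaded files with llama.cpp's `llama-cli` (the file name, prompt, and token count below are placeholders):

```bash
# Sketch: run a downloaded quant with llama-cli from a recent llama.cpp build.
./llama-cli \
  -m ./allura-org_Q3-30b-A3b-Pentiment-Q4_K_M.gguf \
  -p "Write a short poem about quantization." \
  -n 256
```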
✨ Features
- Multiple Quantization Types: Offers a wide range of quantization types, such as bf16, Q8_0, Q6_K_L, etc., to meet different requirements in terms of quality and file size.
- Online Repacking: Some quantization types support online repacking, which can improve performance on ARM and AVX machines.
- Prompt Format: Defines a specific prompt format for interaction with the model.
📦 Installation
Prerequisites
Make sure you have huggingface-cli installed:
pip install -U "huggingface_hub[cli]"
Downloading a Specific File
huggingface-cli download bartowski/allura-org_Q3-30b-A3b-Pentiment-GGUF --include "allura-org_Q3-30b-A3b-Pentiment-Q4_K_M.gguf" --local-dir ./
Downloading Split Files
If the model is bigger than 50GB and split into multiple files, run:
huggingface-cli download bartowski/allura-org_Q3-30b-A3b-Pentiment-GGUF --include "allura-org_Q3-30b-A3b-Pentiment-Q8_0/*" --local-dir ./
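You do not need to merge the shards yourself: llama.cpp can load a split GGUF when pointed at the first shard and will pick up the rest from the same directory. A sketch, assuming a shard layout like the one below (the exact file name depends on what was downloaded):

```bash
# Assumed shard name for illustration; llama.cpp finds the remaining shards automatically.
./llama-cli \
  -m ./allura-org_Q3-30b-A3b-Pentiment-Q8_0/allura-org_Q3-30b-A3b-Pentiment-Q8_0-00001-of-00002.gguf \
  -p "Hello"
```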
💻 Usage Examples
Prompt Format
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
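For illustration, here is a sketch of passing a fully formatted ChatML prompt to llama-cli; the model path and messages are placeholders, and `-e` tells llama-cli to expand the `\n` escapes:

```bash
# Sketch: ChatML-formatted prompt passed directly on the command line.
./llama-cli \
  -m ./allura-org_Q3-30b-A3b-Pentiment-Q4_K_M.gguf -e \
  -p "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nWrite a haiku about autumn.<|im_end|>\n<|im_start|>assistant\n"
```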
📚 Documentation
Downloadable Files
Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method, but with the embedding and output weights quantized to Q8_0 instead of their usual default.
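If you want to check this yourself, one option is the `gguf-dump` tool from the `gguf` Python package; a sketch, assuming one of the _L/_XL files is present locally (the file name is a placeholder):

```bash
# Sketch: list tensor info and look at the types of the embedding/output tensors.
pip install -U gguf
gguf-dump ./allura-org_Q3-30b-A3b-Pentiment-Q4_K_L.gguf | grep -E "token_embd|output"
```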
ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights; details in this PR. If you use Q4_0 and your hardware would benefit from repacking weights, it will do so automatically on the fly.
As of llama.cpp build b4282, you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want slightly better quality, you can use IQ4_NL thanks to this PR, which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower, but it will result in an overall speed increase.
Click to view Q4_0_X_X information (deprecated)
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
Click to view benchmarks on an AVX2 system (EPYC7702)
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ----- | ---- | ------ | ------- | ------- | ---- | --- | ----------- |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| ... (other benchmarks) | ... | ... | ... | ... | ... | ... | ... |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.
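To get comparable numbers on your own hardware, you can use `llama-bench` from llama.cpp; a sketch with placeholder model path, thread count, and test sizes:

```bash
# Sketch: measure prompt-processing (pp) and text-generation (tg) throughput for one quant.
./llama-bench -m ./allura-org_Q3-30b-A3b-Pentiment-Q4_0.gguf -t 64 -p 512,1024 -n 128
```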
🔧 Technical Details
- Quantization Method: Using llama.cpp release b5432 for quantization.
- Original model: https://huggingface.co/allura-org/Q3-30b-A3b-Pentiment
- Calibration: All quants were made using the imatrix option with a dataset from here.
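For reference, the general shape of an imatrix-based quantization run in llama.cpp looks roughly like the sketch below; the calibration file, paths, and output names are placeholders, not the exact commands or dataset used for these quants:

```bash
# Sketch of the two-step imatrix workflow in llama.cpp:
# 1) compute an importance matrix from a calibration text file,
# 2) quantize the full-precision GGUF using that matrix.
./llama-imatrix -m ./Q3-30b-A3b-Pentiment-bf16.gguf -f calibration.txt -o imatrix.dat
./llama-quantize --imatrix imatrix.dat \
  ./Q3-30b-A3b-Pentiment-bf16.gguf ./Q3-30b-A3b-Pentiment-Q4_K_M.gguf Q4_K_M
```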