# Qwen2-Audio-7B-Instruct Quantized Model
This repository provides quantized GGUF weights of Qwen2-Audio-7B-Instruct for efficient deployment and inference.
## Quick Start
If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files; a minimal sketch of the concatenation step follows.
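Multi-part quants are split into sequentially numbered pieces that must be joined back into a single `.gguf` file before loading. Below is a minimal Python sketch of that step; the part names are hypothetical (naming such as `*.part1of2` is an assumption), so substitute the actual file names from the repo's file list, in order:

```python
# Minimal sketch: re-joining a multi-part GGUF download into one file.
# The part names below are hypothetical; use the actual names from the
# repo's file listing, in the correct order.
import shutil

parts = ["model.gguf.part1of2", "model.gguf.part2of2"]  # assumed naming

with open("model.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # byte-for-byte concatenation
```

On a Unix shell, running `cat` over the parts in order achieves the same byte-for-byte result.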
## Features
- Quantized Weights: Offers weighted/imatrix quants of https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct.
- Multiple Quant Types: Provides a variety of quant types with different sizes and qualities.
## Documentation
### About
Weighted/imatrix quants of https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct. Static quants are available at https://huggingface.co/mradermacher/Qwen2-Audio-7B-Instruct-GGUF.
### Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|------|------|---------|-------|
| GGUF | i1-IQ1_S | 2.2 | for the desperate |
| GGUF | i1-IQ1_M | 2.3 | mostly desperate |
| GGUF | i1-IQ2_XXS | 2.5 | |
| GGUF | i1-IQ2_XS | 2.7 | |
| GGUF | i1-IQ2_S | 2.9 | |
| GGUF | i1-Q2_K_S | 3.0 | very low quality |
| GGUF | i1-IQ2_M | 3.0 | |
| GGUF | i1-Q2_K | 3.2 | IQ3_XXS probably better |
| GGUF | i1-IQ3_XXS | 3.3 | lower quality |
| GGUF | i1-IQ3_XS | 3.5 | |
| GGUF | i1-IQ3_S | 3.7 | beats Q3_K* |
| GGUF | i1-Q3_K_S | 3.7 | IQ3_XS probably better |
| GGUF | i1-IQ3_M | 3.9 | |
| GGUF | i1-Q3_K_M | 4.0 | IQ3_S probably better |
| GGUF | i1-Q3_K_L | 4.3 | IQ3_M probably better |
| GGUF | i1-IQ4_XS | 4.4 | |
| GGUF | i1-IQ4_NL | 4.6 | prefer IQ4_XS |
| GGUF | i1-Q4_0 | 4.6 | fast, low quality |
| GGUF | i1-Q4_K_S | 4.7 | optimal size/speed/quality |
| GGUF | i1-Q4_K_M | 4.9 | fast, recommended |
| GGUF | i1-Q4_1 | 5.1 | |
| GGUF | i1-Q5_K_S | 5.5 | |
| GGUF | i1-Q5_K_M | 5.7 | |
| GGUF | i1-Q6_K | 6.5 | practically like static Q6_K |
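If you want to fetch one of the quants above programmatically, `huggingface_hub` can download a single file from a repo. A minimal sketch, assuming this repo is published as `mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF` and that the "fast, recommended" Q4_K_M row follows a `<model>.i1-Q4_K_M.gguf` naming pattern (both are assumptions; check the repo's file list for the exact names):

```python
# Minimal sketch: download the recommended i1-Q4_K_M quant with huggingface_hub.
# Repo id and filename are assumptions; verify them against the repo's file list.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Qwen2-Audio-7B-Instruct-i1-GGUF",  # assumed repo id
    filename="Qwen2-Audio-7B-Instruct.i1-Q4_K_M.gguf",       # assumed filename
)
print(path)  # local cache path of the downloaded GGUF
```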
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
### FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.
### Thanks
I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
## License
This project is licensed under the Apache-2.0 license.