🚀 Huihui-gemma-3n-E4B-it-abliterated
This repository provides static GGUF quants of the model, which supports a range of speech and text processing tasks.
🚀 Quick Start
Access Requirements
To access Gemma on Hugging Face, you must review and agree to Google's usage license. Make sure you are logged in to Hugging Face and acknowledge the license on the model page; requests are processed immediately.
Usage Guide
If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
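As a minimal sketch (the filenames are hypothetical placeholders), old-style split GGUF files of the kind TheBloke's READMEs describe can be merged by simple binary concatenation; newer multi-part GGUFs produced by llama.cpp's gguf-split tool should be merged with that tool instead.

```python
# Sketch: concatenate old-style split GGUF parts into a single file.
# The "model.gguf-split-*" names are placeholders; adjust them to the
# parts you actually downloaded.
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("model.gguf-split-*"))  # e.g. -split-a, -split-b
with open("model.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)  # streamed copy, low memory use
```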
✨ Features
- Multilingual Capabilities: The model tags indicate support for automatic speech recognition, translation, and text-to-text tasks across audio and video modalities.
- Abliterated and Uncensored: This is an abliterated (uncensored) version of the base model, suitable for specific use cases.
📦 Installation
The original model card does not list installation steps.
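As a hedged sketch, a single quant file can be fetched with the huggingface_hub package; the Q4_K_S filename below is taken from the quant table further down.

```python
# Sketch: download one quant file from the Hugging Face Hub.
# Requires: pip install huggingface_hub
# If access is gated, log in first (huggingface-cli login) and accept
# Google's Gemma license as described above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Huihui-gemma-3n-E4B-it-abliterated-GGUF",
    filename="Huihui-gemma-3n-E4B-it-abliterated.Q4_K_S.gguf",
)
print(path)  # local path to the downloaded quant
```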
💻 Usage Examples
The original model card does not include code examples; the sketch below shows one possible way to run a quant locally.
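This is a minimal sketch, not the author's documented workflow: it assumes llama-cpp-python is installed and built recently enough to support the gemma-3n architecture, and that the Q4_K_S quant from the table below has already been downloaded to the working directory.

```python
# Sketch: run a downloaded GGUF quant with llama-cpp-python.
# Requires: pip install llama-cpp-python (a build with gemma-3n
# support is assumed).
from llama_cpp import Llama

llm = Llama(
    model_path="Huihui-gemma-3n-E4B-it-abliterated.Q4_K_S.gguf",
    n_ctx=4096,  # context window; adjust to taste
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF quant is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```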
📚 Documentation
About
This is a static quant of https://huggingface.co/huihui-ai/Huihui-gemma-3n-E4B-it-abliterated. For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Huihui-gemma-3n-E4B-it-abliterated-GGUF). Weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Provided Quants
The provided quants are sorted by size (not necessarily quality); IQ-quants are often preferable over similarly sized non-IQ quants.
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Huihui-gemma-3n-E4B-it-abliterated-GGUF/resolve/main/Huihui-gemma-3n-E4B-it-abliterated.Q2_K.gguf) | Q2_K | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-gemma-3n-E4B-it-abliterated-GGUF/resolve/main/Huihui-gemma-3n-E4B-it-abliterated.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-gemma-3n-E4B-it-abliterated-GGUF/resolve/main/Huihui-gemma-3n-E4B-it-abliterated.Q3_K_M.gguf) | Q3_K_M | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Huihui-gemma-3n-E4B-it-abliterated-GGUF/resolve/main/Huihui-gemma-3n-E4B-it-abliterated.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-gemma-3n-E4B-it-abliterated-GGUF/resolve/main/Huihui-gemma-3n-E4B-it-abliterated.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-gemma-3n-E4B-it-abliterated-GGUF/resolve/main/Huihui-gemma-3n-E4B-it-abliterated.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Huihui-gemma-3n-E4B-it-abliterated-GGUF/resolve/main/Huihui-gemma-3n-E4B-it-abliterated.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Huihui-gemma-3n-E4B-it-abliterated-GGUF/resolve/main/Huihui-gemma-3n-E4B-it-abliterated.Q5_K_S.gguf) | Q5_K_S | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-gemma-3n-E4B-it-abliterated-GGUF/resolve/main/Huihui-gemma-3n-E4B-it-abliterated.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Huihui-gemma-3n-E4B-it-abliterated-GGUF/resolve/main/Huihui-gemma-3n-E4B-it-abliterated.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Huihui-gemma-3n-E4B-it-abliterated-GGUF/resolve/main/Huihui-gemma-3n-E4B-it-abliterated.Q8_0.gguf) | Q8_0 | 7.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Huihui-gemma-3n-E4B-it-abliterated-GGUF/resolve/main/Huihui-gemma-3n-E4B-it-abliterated.f16.gguf) | f16 | 13.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.
🔧 Technical Details
No technical details are provided in the original document.
📄 License
The model is distributed under the Gemma license.