VeriReason-Qwen2.5-1.5B-grpo-small Quantized Model
This project provides static quantizations of the VeriReason-Qwen2.5-1.5B-grpo-small model. It offers various quantized versions for different usage scenarios, with details about the model, usage instructions, and provided quantized files.
Quick Start
If you're new to using this model, the following sections will guide you through its basic information, usage, and available quantized versions.
Features
- Multiple Quantized Versions: Offers a range of quantized versions in GGUF format, sorted by size, providing options for different performance and quality requirements.
- Useful References: Provides links to external resources such as TheBloke's READMEs for GGUF file usage, and graphs and thoughts on quantization by other contributors.
Documentation
About
Static quants of Nellyw888/VeriReason-Qwen2.5-1.5B-grpo-small.
Weighted/imatrix quants are not currently available from me. If they do not appear within a week or so of the static ones, I probably have not planned them. Feel free to request them by opening a Community Discussion.
Usage
If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files.
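As a minimal sketch of the typical workflow, the commands below download one quant and run it with llama.cpp. The repository and file names follow the usual naming scheme for these quantizations but are assumptions; check the actual repository listing for the exact file names.

```shell
# Install the Hugging Face CLI (assumes Python/pip available).
pip install -U "huggingface_hub[cli]"

# Download a single quant file (repo/file names are assumed, verify them first).
huggingface-cli download mradermacher/VeriReason-Qwen2.5-1.5B-grpo-small-GGUF \
  VeriReason-Qwen2.5-1.5B-grpo-small.Q4_K_M.gguf --local-dir .

# Run it with a llama.cpp build (llama-cli binary assumed to be on PATH).
llama-cli -m VeriReason-Qwen2.5-1.5B-grpo-small.Q4_K_M.gguf \
  -p "Write a Verilog module for a 4-bit counter." -n 256
```

Any llama.cpp-compatible runtime (llama-cpp-python, LM Studio, ollama with a Modelfile, etc.) can load these files the same way.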
Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|------|------|---------|-------|
| GGUF | Q2_K | 0.8 | |
| GGUF | Q3_K_S | 0.9 | |
| GGUF | Q3_K_M | 0.9 | lower quality |
| GGUF | Q3_K_L | 1.0 | |
| GGUF | IQ4_XS | 1.0 | |
| GGUF | Q4_K_S | 1.0 | fast, recommended |
| GGUF | Q4_K_M | 1.1 | fast, recommended |
| GGUF | Q5_K_S | 1.2 | |
| GGUF | Q5_K_M | 1.2 | |
| GGUF | Q6_K | 1.4 | very good quality |
| GGUF | Q8_0 | 1.7 | fast, best quality |
| GGUF | f16 | 3.2 | 16 bpw, overkill |
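The listed sizes can be sanity-checked against bits per weight (bpw). A minimal sketch, assuming roughly 1.54e9 parameters (the "1.5B" in the model name; the exact count may differ slightly):

```python
# Rough bits-per-weight estimate from a GGUF file size.
# Assumption: ~1.54e9 parameters, inferred from the model name.
N_PARAMS = 1.54e9

def bits_per_weight(size_gb: float, n_params: float = N_PARAMS) -> float:
    """Approximate bits per weight from a file size given in GB."""
    return size_gb * 1e9 * 8 / n_params

# Sizes taken from the table above.
for quant, size_gb in [("Q4_K_M", 1.1), ("Q8_0", 1.7), ("f16", 3.2)]:
    print(f"{quant}: ~{bits_per_weight(size_gb):.1f} bpw")
```

The f16 estimate lands close to the 16 bpw noted in the table; the small overhead comes from metadata and non-quantized tensors in the file.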
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

And here are Artefact2's thoughts on the matter:
FAQ / Model Request
See mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.
Thanks
I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
Information Table
| Property | Details |
|----------|---------|
| Base Model | Nellyw888/VeriReason-Qwen2.5-1.5B-grpo-small |
| Datasets | Nellyw888/RTL-Coder_7b_reasoning_tb_simple, Nellyw888/RTL-Coder_small |
| Language | en |
| Library Name | transformers |
| Quantized By | mradermacher |
| Tags | verilog, reasoning, reinforcement-learning, rtl |