# Mathematical reasoning optimization
**Unireason Qwen3 14B RL GGUF** (Apache-2.0)
A static GGUF quantization of UniReason-Qwen3-14B-RL, suitable for text generation and mathematical-reasoning research.
Large Language Model · Transformers · English
mradermacher · 272 · 1
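
GGUF quantizations like this one are typically run through llama.cpp or its Python bindings. Below is a minimal sketch using llama-cpp-python; the local filename is a hypothetical example, so check the repository for the actual quant files (Q4_K_M, Q5_K_M, Q8_0, ...) before downloading.

```python
# Minimal sketch: running a GGUF quantization with llama-cpp-python.
# The model_path below is an assumed filename, not confirmed by the card.
from llama_cpp import Llama

llm = Llama(
    model_path="UniReason-Qwen3-14B-RL.Q4_K_M.gguf",  # assumed filename
    n_ctx=8192,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

prompt = "Prove that the sum of two even integers is even."
out = llm(prompt, max_tokens=512, temperature=0.6)
print(out["choices"][0]["text"])
```
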
**MiMo 7B RL 0530** (MIT)
MiMo is a series of 7B-parameter models trained from scratch for reasoning tasks. Through optimized pre-training and post-training strategies, it performs strongly on mathematical and code reasoning.
Large Language Model · Transformers
XiaomiMiMo · 319 · 17
**ALP DeepScaleR 1.5B C16K** (Apache-2.0)
ALP_DeepScaleR_1.5B_C16K is trained from DeepScaleR-1.5B with the Adaptive Length Penalty (ALP) method, which significantly reduces token usage while maintaining performance.
Large Language Model · Safetensors
SynthLabsAI · 333 · 1
**Multiverse 32B** (Apache-2.0)
Multiverse-32B is the first open-source non-autoregressive model built on Multiverse. It performs strongly on the AIME benchmark, making it valuable for both research and practical applications.
Large Language Model · Transformers
Multiverse4FM · 11.03k · 1
**Qwen3 30B A3B Quantized.w4a16** (Apache-2.0)
An INT4-quantized version of Qwen3-30B-A3B that reduces disk and GPU memory requirements by roughly 75% while maintaining strong performance.
Large Language Model · Transformers
RedHatAI · 379 · 2
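
Weight-quantized checkpoints in this w4a16 style are commonly served with vLLM, which reads the quantization configuration from the repository. A minimal sketch, assuming the repo id follows the card name (an assumption, not confirmed by the card):

```python
# Minimal sketch: serving an INT4 (w4a16) checkpoint with vLLM.
# The repo id below is inferred from the card name and may differ.
from vllm import LLM, SamplingParams

llm = LLM(model="RedHatAI/Qwen3-30B-A3B-quantized.w4a16")  # assumed repo id
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=512)

outputs = llm.generate(
    ["Count the primes below 100 and explain your reasoning."],
    params,
)
print(outputs[0].outputs[0].text)
```
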
**Phi 4 Mini Reasoning GGUF** (MIT)
Phi-4-mini-reasoning is a lightweight open model built on synthetic data, with a focus on high-quality, reasoning-dense data, and further fine-tuned for more advanced mathematical reasoning.
Large Language Model · Transformers
Mungert · 3,592 · 3
**Phi 4 Reasoning Unsloth Bnb 4bit** (MIT)
Phi-4-reasoning is an advanced reasoning model from Microsoft, fine-tuned from Phi-4 to improve reasoning in mathematics, science, and coding.
Large Language Model · Transformers · Multilingual
unsloth · 1,969 · 2
**Phi 4 Mini Reasoning Unsloth Bnb 4bit** (MIT)
Phi-4-mini-reasoning is a lightweight open model focused on mathematical reasoning, supports a 128K-token context length, and suits environments with limited compute.
Large Language Model · Transformers · Multilingual
unsloth · 2,329 · 5
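
Pre-quantized bitsandbytes 4-bit checkpoints such as these two Unsloth uploads can be loaded directly through Transformers (with bitsandbytes and accelerate installed), since the quantization config ships with the model. A minimal sketch, with the repo id inferred from the card name rather than confirmed:

```python
# Minimal sketch: loading a pre-quantized bnb-4bit checkpoint with Transformers.
# Requires bitsandbytes + accelerate; the repo id is an assumption and may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/Phi-4-mini-reasoning-unsloth-bnb-4bit"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Differentiate f(x) = x^3 * ln(x)."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```
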
**Microsoft Phi 4 Mini Reasoning GGUF** (MIT)
A quantized version of the Microsoft Phi-4-mini-reasoning model, produced with llama.cpp to improve efficiency and run well across different hardware environments.
Large Language Model · Multilingual
bartowski · 1,667 · 7
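
Repositories like this one usually ship many quantization levels in a single repo, so a common pattern is to download only the file you need with huggingface_hub and hand it to a GGUF runtime. A sketch with a hypothetical repo id and filename; the actual names on the hub may differ:

```python
# Minimal sketch: fetching a single GGUF quant file from the hub.
# repo_id and filename are assumptions; list the repo contents to find
# the exact quant you want (Q4_K_M, Q6_K, Q8_0, ...).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/microsoft_Phi-4-mini-reasoning-GGUF",  # assumed
    filename="microsoft_Phi-4-mini-reasoning-Q4_K_M.gguf",    # assumed
)
print("Downloaded to:", path)  # pass this path to llama.cpp, LM Studio, etc.
```
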
**Qwen3 0.6B GGUF**
A quantized version of Qwen3 0.6B, suitable for text generation tasks, with support for a 32K context length and multilingual use.
Large Language Model
lmstudio-community · 9,063 · 5
**Nvidia OpenMath Nemotron 14B Kaggle GGUF**
A 14B-parameter mathematical language model open-sourced by NVIDIA, quantized with llama.cpp so it runs efficiently on a range of hardware.
Large Language Model · English
bartowski · 432 · 1
**FluentlyLM Prinum** (MIT)
The first standalone model of the Fluently Language Model project: a 32.5B-parameter causal language model supporting multiple languages and tasks.
Large Language Model · Transformers · Multilingual
fluently-lm · 241 · 28
**Llama 3.1 Tulu 3.1 8B**
Tülu 3 is a leading family of instruction-following models, offering fully open data, code, and training recipes as a comprehensive guide to modern post-training techniques. Version 3.1 improves the reinforcement-learning stage, delivering better overall performance.
Large Language Model · Transformers · English
allenai · 3,643 · 33
**SeaLLM 7B V2** (Other)
SeaLLM-7B-v2 is a state-of-the-art multilingual large language model for Southeast Asian languages; at half the size of its predecessor, it delivers superior performance on multilingual tasks such as world knowledge, mathematical reasoning, and instruction following.
Large Language Model · Transformers · Multilingual
SeaLLMs · 1,993 · 66