Phi-4-mini-reasoning GGUF Models
Phi-4-mini-reasoning GGUF models are designed for text generation tasks, especially excelling in multi-step, logic-intensive mathematical problem-solving under memory and compute constraints. These models offer high-quality, step-by-step problem-solving capabilities, making them suitable for various mathematical reasoning scenarios.
Quick Start
To start using the Phi-4-mini-reasoning model, first install the required packages. The model has been integrated into transformers version 4.51.3; you can verify your current transformers version with `pip list | grep transformers`. Python 3.8 and 3.10 are recommended.
The list of required packages is as follows:
flash_attn==2.7.4.post1
torch==2.5.1
transformers==4.51.3
accelerate==1.3.0
Here is an example code for inference:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

model_id = "microsoft/Phi-4-mini-reasoning"

# Load the model onto the GPU with its native dtype
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Build a chat-formatted prompt from a single user message
messages = [{
    "role": "user",
    "content": "How to solve 3*x^2+4*x+5=1?"
}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)

# Sample a step-by-step solution; reasoning traces can be long, hence the large token budget
outputs = model.generate(
    **inputs.to(model.device),
    max_new_tokens=32768,
    temperature=0.8,
    top_p=0.95,
    do_sample=True,
)

# Decode only the newly generated tokens (skip the prompt)
outputs = tokenizer.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])
print(outputs[0])
Features
Model Generation Details
This model was generated using llama.cpp at commit 19e899c.
Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)
Our latest quantization method introduces precision-adaptive quantization for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on Llama-3-8B. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency.
Benchmark Context
All tests were conducted on Llama-3-8B-Instruct using:
- Standard perplexity evaluation pipeline
- 2048-token context window
- Same prompt set across all quantizations
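For reference, a perplexity measurement of this kind can be sketched as follows. This is a minimal illustration in the spirit of the setup above; the model id and evaluation text are stand-ins, not the exact pipeline or corpus used to produce the numbers below.

```python
# Minimal perplexity sketch; model id and text are illustrative stand-ins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-reasoning"  # substitute the model under test
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", trust_remote_code=True)

text = "Perplexity is computed over a fixed evaluation text, truncated to the context window."
ids = tokenizer(text, return_tensors="pt", truncation=True, max_length=2048)["input_ids"]

with torch.no_grad():
    loss = model(input_ids=ids, labels=ids).loss  # mean cross-entropy per token
print("Perplexity:", torch.exp(loss).item())
```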
Method
- Dynamic Precision Allocation:
  - First/last 25% of layers → IQ4_XS (selected layers)
  - Middle 50% → IQ2_XXS/IQ3_S (increased efficiency)
- Critical Component Protection:
  - Embeddings/output layers use Q5_K
  - Reduces error propagation by 38% vs standard 1-2 bit
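As a rough illustration of this bucketing, the sketch below is hypothetical; the function names and exact splits are illustrative, not the real IQ-DynamicGate implementation in llama.cpp.

```python
# Hypothetical sketch of the layer-bucketing idea described above.
def choose_block_quant(layer_index: int, num_layers: int) -> str:
    quarter = num_layers // 4
    if layer_index < quarter or layer_index >= num_layers - quarter:
        return "IQ4_XS"   # first/last 25% of layers kept at higher precision
    return "IQ2_XXS"      # middle 50% uses IQ2_XXS (or IQ3_S) for efficiency

def choose_tensor_quant(tensor_name: str, layer_index: int, num_layers: int) -> str:
    # Critical component protection: embeddings and the output head stay at Q5_K.
    if "embed" in tensor_name or "output" in tensor_name:
        return "Q5_K"
    return choose_block_quant(layer_index, num_layers)

# Example assignment for a 32-block model
for i in (0, 8, 16, 24, 31):
    print(i, choose_block_quant(i, 32))
```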
Quantization Performance Comparison (Llama-3-8B)
| Quantization | Standard PPL | DynamicGate PPL | Δ PPL | Std Size | DG Size | Δ Size | Std Speed | DG Speed |
|---|---|---|---|---|---|---|---|---|
| IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s |
| IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s |
| IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s |
| IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s |
| IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s |
Key:
- PPL = Perplexity (lower is better)
- Δ PPL = Percentage change from standard to DynamicGate
- Speed = Inference time (CPU avx2, 2048 token context)
- Size differences reflect mixed quantization overhead
Key Improvements:
- IQ1_M shows a massive 43.9% perplexity reduction (27.46 → 15.41)
- IQ2_S cuts perplexity by 36.9% while adding only 0.2GB
- IQ1_S maintains 39.7% better accuracy despite 1-bit quantization
Tradeoffs:
- All variants have modest size increases (0.1-0.3GB)
- Inference speeds remain comparable (<5% difference)
When to Use These Models
- Fitting models into GPU VRAM
- Memory-constrained deployments
- CPU and Edge Devices where 1-2bit errors can be tolerated
- Research into ultra-low-bit quantization
Choosing the Right Model Format
Selecting the correct model format depends on your hardware capabilities and memory constraints.
BF16 (Brain Float 16) – Use if BF16 acceleration is available
- A 16-bit floating-point format designed for faster computation while retaining good precision.
- Provides similar dynamic range as FP32 but with lower memory usage.
- Recommended if your hardware supports BF16 acceleration (check your device's specs).
- Ideal for high-performance inference with reduced memory footprint compared to FP32.
Use BF16 if:
- Your hardware has native BF16 support (e.g., newer GPUs, TPUs).
- You want higher precision while saving memory.
- You plan to requantize the model into another format.
Avoid BF16 if:
- Your hardware does not support BF16 (it may fall back to FP32 and run slower).
- You need compatibility with older devices that lack BF16 optimization.
F16 (Float 16) – More widely supported than BF16
- A 16-bit floating-point format with high precision but a smaller range of values than BF16.
- Works on most devices with FP16 acceleration support (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.
Use F16 if:
- Your hardware supports FP16 but not BF16.
- You need a balance between speed, memory usage, and accuracy.
- You are running on a GPU or another device optimized for FP16 computations.
Avoid F16 if:
- Your device lacks native FP16 support (it may run slower than expected).
- You have memory limitations.
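If you are unsure whether your GPU has native BF16 support, a quick check with PyTorch (assumed to be installed, as in the Quick Start; other backends need their own checks) can help you decide between the BF16 and F16 files:

```python
import torch

# Quick capability check on CUDA GPUs to choose between the BF16 and F16 GGUF files.
if torch.cuda.is_available():
    print("BF16 supported:", torch.cuda.is_bf16_supported())
    print("Compute capability:", torch.cuda.get_device_capability())
else:
    print("No CUDA device found; a quantized CPU format is likely the better choice.")
```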
Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- Lower-bit models (Q4_K) → Best for minimal memory usage, may have lower precision.
- Higher-bit models (Q6_K, Q8_0) → Better accuracy, requires more memory.
Use Quantized Models if:
- You are running inference on a CPU and need an optimized model.
- Your device has low VRAM and cannot load full-precision models.
- You want to reduce memory footprint while keeping reasonable accuracy.
Avoid Quantized Models if:
- You need maximum accuracy (full-precision models are better for this).
- Your hardware has enough VRAM for higher-precision formats (BF16/F16).
Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)
These models are optimized for extreme memory efficiency, making them ideal for low-power devices or large-scale deployments where memory is a critical constraint.
- IQ3_XS: Ultra-low-bit quantization (3-bit) with extreme memory efficiency.
- Use case: Best for ultra-low-memory devices where even Q4_K is too large.
- Trade-off: Lower accuracy compared to higher-bit quantizations.
- IQ3_S: Small block size for maximum memory efficiency.
- Use case: Best for low-memory devices where IQ3_XS is too aggressive.
- IQ3_M: Medium block size for better accuracy than IQ3_S.
- Use case: Suitable for low-memory devices where IQ3_S is too limiting.
- Q4_K: 4-bit quantization with block-wise optimization for better accuracy.
- Use case: Best for low-memory devices where Q6_K is too large.
- Q4_0: Pure 4-bit quantization, optimized for ARM devices.
- Use case: Best for ARM-based devices or low-memory environments.
Summary Table: Model Format Selection
| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|---|---|---|---|---|
| BF16 | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| F16 | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
| Q4_K | Medium-Low | Low | CPU or low-VRAM devices | Best for memory-constrained environments |
| Q6_K | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
| Q8_0 | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
| IQ3_XS | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency, lower accuracy |
| Q4_0 | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |
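The table above can be condensed into a rough rule of thumb. The helper below is purely illustrative; its thresholds are assumptions, not official guidance.

```python
# Illustrative helper mirroring the summary table; the VRAM thresholds are assumptions.
def pick_gguf_variant(vram_gb: float, supports_bf16: bool, cpu_only: bool = False) -> str:
    if cpu_only:
        return "q4_k"      # CPU inference with limited memory
    if vram_gb < 4:
        return "iq3_xs"    # ultra-low-memory devices
    if vram_gb < 8:
        return "q4_k"
    if vram_gb < 12:
        return "q6_k"
    return "bf16" if supports_bf16 else "f16"

print(pick_gguf_variant(vram_gb=6, supports_bf16=False))  # -> q4_k
```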
Included Files & Details
Phi-4-mini-reasoning-bf16.gguf
- Model weights preserved in BF16.
- Use this if you want to requantize the model into a different format.
- Best if your device supports BF16 acceleration.
Phi-4-mini-reasoning-f16.gguf
- Model weights stored in F16.
- Use if your device supports FP16, especially if BF16 is not available.
Phi-4-mini-reasoning-bf16-q8_0.gguf
- Output & embeddings remain in BF16.
- All other layers quantized to Q8_0.
- Use if your device supports BF16 and you want a quantized version.
Phi-4-mini-reasoning-f16-q8_0.gguf
- Output & embeddings remain in F16.
- All other layers quantized to Q8_0.
Phi-4-mini-reasoning-q4_k.gguf
- Output & embeddings quantized to Q8_0.
- All other layers quantized to Q4_K.
- Good for CPU inference with limited memory.
Phi-4-mini-reasoning-q4_k_s.gguf
- Smallest Q4_K variant, using less memory at the cost of accuracy.
- Best for very low-memory setups.
Phi-4-mini-reasoning-q6_k.gguf
- Output & embeddings quantized to Q8_0.
- All other layers quantized to Q6_K.
Phi-4-mini-reasoning-q8_0.gguf
- Fully Q8 quantized model for better accuracy.
- Requires more memory but offers higher precision.
Phi-4-mini-reasoning-iq3_xs.gguf
- IQ3_XS quantization, optimized for extreme memory efficiency.
- Best for ultra-low-memory devices.
Phi-4-mini-reasoning-iq3_m.gguf
- IQ3_M quantization, offering a medium block size for better accuracy.
- Suitable for low-memory devices.
Phi-4-mini-reasoning-q4_0.gguf
- Pure Q4_0 quantization, optimized for ARM devices.
- Best for low-memory environments.
- If better accuracy is needed, consider IQ4_NL instead.
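A common way to run any of the files above is llama.cpp or its Python bindings. The sketch below assumes the llama-cpp-python package (not part of the requirements listed earlier) and uses an illustrative file and parameters:

```python
# Minimal sketch using llama-cpp-python; file path and settings are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="Phi-4-mini-reasoning-q4_k.gguf",  # any of the GGUF files above
    n_ctx=4096,        # context window; the model itself supports up to 128K tokens
    n_gpu_layers=-1,   # offload all layers to GPU if available; use 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How to solve 3*x^2+4*x+5=1?"}],
    max_tokens=1024,
    temperature=0.8,
)
print(out["choices"][0]["message"]["content"])
```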
Documentation
Intended Uses
Primary Use Cases
Phi-4-mini-reasoning is designed for multi-step, logic-intensive mathematical problem-solving tasks under memory/compute constrained environments and latency bound scenarios. Some of the use cases include formal proof generation, symbolic computation, advanced word problems, and a wide range of mathematical reasoning scenarios. These models excel at maintaining context across steps, applying structured logic, and delivering accurate, reliable solutions in domains that require deep analytical thinking.
Use Case Considerations
This model is designed and tested for math reasoning only. It is not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models, as well as performance differences across languages, as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including but not limited to privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
Release Notes
This release of Phi-4-mini-reasoning addresses user feedback and market demand for a compact reasoning model. It is a compact transformer-based language model optimized for mathematical reasoning, built to deliver high-quality, step-by-step problem solving in environments where computing or latency is constrained. The model is fine-tuned with synthetic math data from a more capable model (much larger, smarter, more accurate, and better at following instructions), which has resulted in enhanced reasoning performance. Phi-4-mini-reasoning balances reasoning ability with efficiency, making it potentially suitable for educational applications, embedded tutoring, and lightweight deployment on edge or mobile systems.
If a critical issue is identified with Phi-4-mini-reasoning, it should be promptly reported through the MSRC Researcher Portal or secure@microsoft.com.
Model Quality
To understand its capabilities, the 3.8B-parameter Phi-4-mini-reasoning model was compared with a set of models over a variety of reasoning benchmarks. A high-level overview of the model quality is as follows:
| Model | AIME | MATH-500 | GPQA Diamond |
|---|---|---|---|
| o1-mini* | 63.6 | 90.0 | 60.0 |
| DeepSeek-R1-Distill-Qwen-7B | 53.3 | 91.4 | 49.5 |
| DeepSeek-R1-Distill-Llama-8B | 43.3 | 86.9 | 47.3 |
| Bespoke-Stratos-7B* | 20.0 | 82.0 | 37.8 |
| OpenThinker-7B* | 31.3 | 83.0 | 42.4 |
| Llama-3.2-3B-Instruct | 6.7 | 44.4 | 25.3 |
| Phi-4-Mini (base model, 3.8B) | 10.0 | 71.8 | 36.9 |
| Phi-4-mini-reasoning (3.8B) | 57.5 | 94.6 | 52.0 |
Overall, with only 3.8B parameters, the model achieves a level of multilingual language understanding and reasoning ability comparable to much larger models. However, it is still fundamentally limited by its size for certain tasks: the model simply does not have the capacity to store extensive factual knowledge, so users may encounter factual inaccuracies. This weakness may be mitigated by augmenting Phi-4 with a search engine, particularly when using the model under RAG settings.
Usage
Tokenizer
Phi-4-mini-reasoning supports a vocabulary size of up to 200,064 tokens. The tokenizer files already provide placeholder tokens that can be used for downstream fine-tuning, and the vocabulary can also be extended up to the model's full vocabulary size.
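As an example, extending the tokenizer for downstream fine-tuning could look like the sketch below; the added token names are hypothetical, and the embedding resize keeps the model consistent with the tokenizer.

```python
# Hedged sketch of adding custom tokens before fine-tuning; token names are hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-reasoning"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", trust_remote_code=True)

num_added = tokenizer.add_tokens(["<|my_tool_call|>", "<|my_tool_result|>"])
if num_added:
    # Keep the embedding matrix in sync with the tokenizer (within the 200,064 vocabulary budget).
    model.resize_token_embeddings(len(tokenizer))

print("Added tokens:", num_added, "| vocabulary size:", len(tokenizer))
```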
Input Formats
Given the nature of the training data, the Phi-4-mini-reasoning model is best suited for prompts using the following chat format.
Chat format
This format is used for general conversation and instructions:
<|system|>Your name is Phi, an AI math expert developed by Microsoft.<|end|><|user|>How to solve 3*x^2+4*x+5=1?<|end|><|assistant|>
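You normally do not need to construct this string by hand; the tokenizer's chat template should produce it. A small sketch reusing the Quick Start tokenizer:

```python
# Build the chat-format prompt above via the tokenizer's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-4-mini-reasoning")
messages = [
    {"role": "system", "content": "Your name is Phi, an AI math expert developed by Microsoft."},
    {"role": "user", "content": "How to solve 3*x^2+4*x+5=1?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # expected to match the <|system|>...<|user|>...<|assistant|> format shown above
```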
Training
Model
- Architecture: Phi-4-mini-reasoning shares the same architecture as Phi-4-Mini, which has 3.8B parameters and is a dense decoder-only Transformer model. When compared with Phi-3.5-Mini, the major changes with Phi-4-Mini are 200K vocabulary, grouped-query attention, and shared input and output embedding.
- Inputs: Text. It is best suited for prompts using the chat format.
- Context length: 128K tokens
- GPUs: 128 H100-80G
- Training time: 2 days
- Training data: 150B tokens
- Outputs: Generated text
- Dates: Trained in February 2024
- Status: This is a static model trained on offline datasets with the cutoff date of February 2025 for publicly available data.
- Supported languages: English
- Release date: April 2025
Training Datasets
The training dataset details were not fully provided in the original README.
Technical Details
Model Generation
The model was generated using llama.cpp at commit 19e899c.
Ultra-Low-Bit Quantization
The ultra-low-bit quantization method with IQ-DynamicGate (1-2 bit) uses precision-adaptive quantization and layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency. It has been benchmarked on Llama-3-8B and shows significant improvements in perplexity and error propagation reduction.
Model Architecture
Phi-4-mini-reasoning shares the same architecture as Phi-4-Mini, which is a dense decoder-only Transformer model with 3.8B parameters. The major changes compared to Phi-3.5-Mini include a 200K vocabulary, grouped-query attention, and shared input and output embedding.
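To make the grouped-query attention change concrete, here is a small sketch of the core idea with illustrative head counts (not the actual Phi-4-Mini configuration): several query heads share each key/value head, which shrinks the KV cache.

```python
# Illustrative grouped-query attention: 8 query heads share 2 KV heads (shapes are made up).
import torch

batch, seq_len, n_q_heads, n_kv_heads, head_dim = 1, 8, 8, 2, 16
group = n_q_heads // n_kv_heads  # query heads per KV head

q = torch.randn(batch, n_q_heads, seq_len, head_dim)
k = torch.randn(batch, n_kv_heads, seq_len, head_dim)
v = torch.randn(batch, n_kv_heads, seq_len, head_dim)

# Each group of query heads attends to the same (repeated) K/V head.
k = k.repeat_interleave(group, dim=1)
v = v.repeat_interleave(group, dim=1)

scores = q @ k.transpose(-2, -1) / head_dim ** 0.5
attn = torch.softmax(scores, dim=-1) @ v
print(attn.shape)  # torch.Size([1, 8, 8, 16])
```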
License
This project is licensed under the MIT License. You can find the full license text here.

