
Phi 4 Mini Reasoning GGUF

Developed by Mungert
Phi-4-mini-reasoning is a lightweight open model built on high-quality, reasoning-rich synthetic data and further fine-tuned for more advanced mathematical reasoning.
Downloads 3,592
Release Time: 5/2/2025

Model Overview

Phi-4-mini-reasoning is a lightweight language model focused on mathematical reasoning tasks. It supports a context length of 128K tokens and is suited to compute-constrained or latency-sensitive environments.

Model Features

Ultra-low bit quantization
Introduces a precision-adaptive quantization method for ultra-low-bit models (1-2 bit), preserving accuracy while keeping memory use extremely low.
Multiple model formats
Provides multiple formats such as BF16, F16, and quantized GGUF variants, which can be selected according to hardware capabilities and memory limits; see the loading sketch after this list.
Support for long context
Supports a context length of 128K tokens, enabling better handling of long texts.
Strong mathematical reasoning ability
Fine-tuning gives it strong performance on mathematical reasoning tasks, including formal proof generation, symbolic computation, and multi-step word problems.
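
The format choices above map directly onto how the model is loaded. As a rough illustration, the sketch below uses llama-cpp-python to load one of the quantized GGUF files; the exact file name and the quantization suffix (q4_k_m) are assumptions, and the context length can be raised toward the 128K maximum if memory allows.

```python
# A minimal loading sketch, assuming llama-cpp-python is installed and that the
# file name below matches one of the published quantized GGUF files (the
# quantization suffix q4_k_m is an assumption).
from llama_cpp import Llama

llm = Llama(
    model_path="Phi-4-mini-reasoning-q4_k_m.gguf",  # assumed file name
    n_ctx=8192,        # raise toward the 128K maximum if memory allows
    n_gpu_layers=-1,   # offload all layers to GPU; set to 0 for CPU-only
)
```

Lower-bit quantizations trade a small amount of accuracy for a much smaller memory footprint, which is the trade-off the feature list above describes.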

Model Capabilities

Mathematical reasoning
Formal proof generation
Symbolic computation
Advanced word problem solving
Multi-step logical reasoning

Use Cases

Education
Mathematics tutoring
Used in embedded tutoring systems to help students solve complex mathematical problems (see the query sketch after this section).
Performs excellently in mathematical reasoning tasks
Edge computing
Edge device deployment
Deployed on edge or mobile systems with limited memory to provide lightweight mathematical reasoning capabilities.
Suitable for low-memory environments
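
For the tutoring and edge scenarios above, a minimal query might look like the sketch below, again assuming llama-cpp-python and an illustrative GGUF file name; the question is an example only.

```python
# A minimal query sketch for a tutoring-style prompt, again assuming
# llama-cpp-python and an illustrative GGUF file name; the question is an
# example only.
from llama_cpp import Llama

llm = Llama(model_path="Phi-4-mini-reasoning-q4_k_m.gguf", n_ctx=4096)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a careful math tutor. Reason step by step."},
        {"role": "user", "content": "A train travels 120 km in 1.5 hours. What is its average speed in km/h?"},
    ],
    max_tokens=512,
    temperature=0.2,
)
print(response["choices"][0]["message"]["content"])  # the model's worked answer
```

The returned message content holds the model's step-by-step answer, which an embedded tutoring system could display directly or post-process.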