
Phi Mini MoE Instruct GGUF

Developed by gabriellarson
Phi-mini-MoE is a lightweight Mixture of Experts (MoE) model for English business and research use, well suited to resource-constrained and low-latency environments.
Downloads: 2,458
Release Time: 6/24/2025

Model Overview

Phi-mini-MoE is a lightweight Mixture of Experts model, compressed and distilled from the Phi-3.5-MoE and GRIN-MoE base models using the SlimMoE method. It is intended for general AI systems and resource-constrained environments.
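
For local use, the GGUF build can be loaded with a runtime such as llama-cpp-python. The sketch below is a minimal example under assumptions: the file name Phi-mini-MoE-instruct-Q4_K_M.gguf is a placeholder for whichever quantization you download, and the settings are illustrative rather than recommended values.

# Minimal loading sketch using llama-cpp-python (assumed runtime, not the only option).
# The model file name is a placeholder; use the quantization you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Phi-mini-MoE-instruct-Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=4096,    # context window; lower it to save memory on constrained hosts
    n_threads=4,   # CPU threads; tune to the machine
)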

Model Features

Lightweight design
With 7.6 billion total parameters and 2.4 billion active parameters, it is suitable for resource-constrained environments (a rough memory estimate follows this list).
Efficient compression and distillation
Compressed and distilled from the Phi-3.5-MoE and GRIN-MoE base models using the SlimMoE method.
Multi-scenario applicability
Suitable for general AI systems and for scenarios with memory or compute constraints and low-latency requirements.
High-quality training data
The training data contains 400 billion tokens, including high-quality public documents, synthetic educational data, and chat-format supervised data.
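
Because memory footprint is the main constraint these features target, a back-of-envelope estimate helps when choosing a quantization. The figures below assume roughly 4-bit weights (about 0.56 bytes per parameter including quantization overhead); actual GGUF file sizes vary by quantization scheme, and KV-cache memory grows with context length.

# Rough weight-memory estimate for the 7.6B-parameter model at ~4-bit quantization.
# The bytes-per-weight figure is an assumption, not a measured value.
TOTAL_PARAMS = 7.6e9           # total parameters (2.4B active per token)
BYTES_PER_WEIGHT_Q4 = 0.56     # ~4.5 bits/weight, typical for 4-bit quants (assumption)

weights_gib = TOTAL_PARAMS * BYTES_PER_WEIGHT_Q4 / 1024**3
print(f"Approximate weight memory at ~4-bit: {weights_gib:.1f} GiB")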

Model Capabilities

Text generation
Instruction following
Mathematical reasoning
Code generation
Common sense reasoning
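
The instruction-following and reasoning capabilities listed above can be exercised through the chat interface. A minimal sketch, reusing the llm object from the loading example; the prompt is illustrative only:

# Instruction-following sketch (assumes the `llm` object created earlier).
out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "A train travels 120 km in 1.5 hours. What is its average speed? Answer in one sentence."},
    ],
    max_tokens=128,
    temperature=0.2,  # low temperature for focused, deterministic answers
)
print(out["choices"][0]["message"]["content"])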

Use Cases

Business applications
Customer service assistant
Used to handle customer inquiries and provide support.
Provides quick responses in low-latency environments (a streaming sketch follows below).
Research
Academic research assistance
Helps researchers generate and organize research content.
Provides high-quality text generation and reasoning support.
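
For the low-latency customer-service scenario, streaming the reply token by token keeps perceived response time short. A minimal sketch, again assuming the llm object from the loading example; the prompts are illustrative only:

# Streaming customer-support reply (assumes the `llm` object created earlier).
messages = [
    {"role": "system", "content": "You are a concise customer support assistant."},
    {"role": "user", "content": "How do I reset my account password?"},
]

for chunk in llm.create_chat_completion(messages=messages, max_tokens=256, stream=True):
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="", flush=True)  # print tokens as they arrive
print()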