
FairyR1-32B GGUF

GGUF quantizations by Mungert
FairyR1-32B is an efficient large language model developed by Peking University DS-LAB, based on DeepSeek-R1-Distill-Qwen-32B. It achieves a balance between high performance and low-cost inference through an innovative 'distillation-fusion' process.
Downloads: 867
Release Time: 5/25/2025

Model Overview

FairyR1-32B is a 32B-parameter text generation model focused on mathematical and programming tasks. On those tasks it matches or surpasses the performance of much larger models while using only about 5% of their parameters.

Model Features

IQ-DynamicGate Ultra-Low-Bit Quantization
Uses precision-adaptive quantization designed for ultra-low-bit (1-2 bit) models, delivering extreme memory efficiency while preserving accuracy (a loading sketch follows this list).
Distillation-Fusion Process
Balances high performance with low-cost inference through task-specific fine-tuning followed by model fusion.
Math and Programming Optimization
Specially optimized for performance in mathematics and programming, excelling in evaluations such as AIME and LiveCodeBench.
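The GGUF files can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using the llama-cpp-python package; the filename FairyR1-32B-IQ2_M.gguf and the parameter values are assumptions, to be replaced with the quantization file you actually download.

```python
from llama_cpp import Llama

# Load a local GGUF quant of FairyR1-32B.
# The filename is an assumption -- substitute the quantization level you downloaded.
llm = Llama(
    model_path="FairyR1-32B-IQ2_M.gguf",
    n_ctx=4096,        # context window; adjust to the task and available memory
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

# Ask a simple math question to verify the model is responding.
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the derivative of x^3 * sin(x)?"}],
    max_tokens=512,
)
print(output["choices"][0]["message"]["content"])
```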

Model Capabilities

Text Generation
Mathematical Problem Solving
Programming Task Processing
Scientific QA

Use Cases

Education
Mathematical Problem Solving
Solving competition-level math problems such as those from AIME
Achieves strong results on the AIME 2024 and 2025 evaluations
Programming
Code Generation and Understanding
Handling programming-related tasks such as code writing and comprehension
Achieves strong results on the LiveCodeBench evaluation (see the prompt sketch after this list)
Network Monitoring
AI Network Monitoring Assistant
Used for network monitoring tasks, including quantum-readiness security checks
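As a usage illustration for the programming use case above, the sketch below sends a LiveCodeBench-style coding prompt through the same llama-cpp-python interface. The prompt text, filename, and sampling settings are illustrative assumptions, not settings recommended by the model authors.

```python
from llama_cpp import Llama

# Reload the quantized model (same assumed filename as in the loading sketch above).
llm = Llama(model_path="FairyR1-32B-IQ2_M.gguf", n_ctx=4096, n_gpu_layers=-1)

# An illustrative programming prompt of the kind the model is tuned for.
coding_prompt = (
    "Write a Python function two_sum(nums, target) that returns the indices "
    "of the two numbers in nums that add up to target."
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": coding_prompt}],
    max_tokens=1024,
    temperature=0.6,  # moderate sampling temperature; tune for your workload
)
print(response["choices"][0]["message"]["content"])
```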