
Fathom-R1-14B

Developed by FractalAIResearch
A 14B-parameter math reasoning model trained for $499 that achieves performance comparable to the closed-source o4-mini within a 16K-token context window
Downloads: 2,112
Release Date: 5/13/2025

Model Overview

A 14B-parameter reasoning model based on DeepSeek-R1-Distill-Qwen-14B that achieves state-of-the-art mathematical reasoning within a strict 16K-token context limit through an innovative training recipe.

Model Features

Low-cost, efficient training
Achieves performance comparable to the closed-source o4-mini at a training cost of only $499
16K context limit
Performance is optimized under a strict 16K-token context window, avoiding the reliability issues of excessively long reasoning chains
Iterative curriculum learning
Uses a multi-round curriculum learning strategy to progressively improve performance on math problems of increasing difficulty
Reasoning-chain compression
RL training encourages the model to generate more concise, effective reasoning steps

Model Capabilities

Advanced mathematical reasoning
Olympiad math problem solving
Step-by-step solutions for complex problems
Cross-domain knowledge application

Use Cases

Education
Olympiad math tutoring
Used for solving and tutoring competition math problems such as AIME and HMMT
Achieves 52.71% Pass@1 accuracy on AIME 2025
Math education assistance
Helps students understand step-by-step reasoning processes for complex math concepts
Research
Reasoning model research
Serves as a benchmark for studying low-cost, high-efficiency reasoning models
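The Pass@1 figure cited above is conventionally estimated by sampling several completions per problem and applying the unbiased pass@k estimator. A sketch of that standard formula (assumed here as the evaluation convention; this is not taken from the Fathom evaluation code):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate given n sampled completions,
    of which c are correct: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # Every size-k draw must contain at least one correct sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k=1 this reduces to the fraction of correct samples.
print(pass_at_k(n=4, c=2, k=1))  # → 0.5
```

Averaging this quantity over all benchmark problems gives the reported Pass@1 score.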