
MMR1-Math-v0-7B

Developed by MMR1
A large multimodal model focused on mathematical tasks, achieving state-of-the-art performance among open-source 7B multimodal models.
Release Date: 3/11/2025

Model Overview

MMR1-Math-v0-7B is a large multimodal model built on Qwen2.5-VL-7B-Instruct and specialized for mathematical reasoning. Trained on only 6k carefully selected samples, it achieves state-of-the-art results among open-source 7B multimodal models across multiple mathematical reasoning benchmarks.
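The "carefully selected" training set is described below as public data sampled uniformly over task difficulty and reasoning diversity. A minimal sketch of what such difficulty/diversity-stratified uniform sampling could look like; the bucket names, topic tags, and pool size here are illustrative assumptions, not details from the model card:

```python
import random
from collections import Counter

random.seed(0)  # deterministic for the example

# Hypothetical sample pool: each item carries a difficulty bucket and a
# topic tag (labels are made up for illustration).
pool = [
    {"id": i,
     "difficulty": random.choice(["easy", "medium", "hard"]),
     "topic": random.choice(["geometry", "algebra", "charts"])}
    for i in range(60_000)
]

def stratified_uniform_sample(pool, n_total, keys=("difficulty", "topic")):
    """Draw ~n_total items, splitting the budget evenly across every
    (difficulty, topic) stratum so no bucket dominates the subset."""
    strata = {}
    for item in pool:
        strata.setdefault(tuple(item[k] for k in keys), []).append(item)
    per_stratum = n_total // len(strata)
    selected = []
    for items in strata.values():
        selected.extend(random.sample(items, min(per_stratum, len(items))))
    return selected

subset = stratified_uniform_sample(pool, 6_000)
counts = Counter((s["difficulty"], s["topic"]) for s in subset)
print(len(subset), min(counts.values()), max(counts.values()))
```

With 9 strata the even split yields 666 items per bucket, so the subset is slightly under the 6k budget; a real pipeline would redistribute the remainder or weight buckets by quality scores.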

Model Features

SOTA Performance
Sets a new benchmark for mathematical tasks among open-source 7B multimodal models
Efficient Training
Achieves top-tier performance with only 6k high-quality samples and 6 hours of RL training
Data Strategy
High-quality public data uniformly sampled based on task difficulty and mathematical reasoning diversity
GRPO Training
Efficient RL training using 64 H100 GPUs (15 epochs)
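GRPO (Group Relative Policy Optimization) avoids a learned value critic by scoring each sampled response against the other responses in its group. A minimal sketch of the group-relative advantage computation; the rewards below are made-up correctness scores, not data from this model's training run:

```python
import statistics

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: normalize each sampled response's reward
    by the mean and population std of its group (no learned critic)."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# One prompt, a group of 4 sampled solutions scored 1 (correct) / 0 (wrong).
advs = grpo_advantages([1.0, 0.0, 0.0, 1.0])
print(advs)  # correct answers get positive advantage, wrong ones negative
```

These advantages then weight the policy-gradient update for each response's tokens, so the model is pushed toward solutions that beat the group average.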

Model Capabilities

Multimodal Mathematical Reasoning
Image-Text Understanding
Complex Mathematical Problem Solving
Logical Reasoning

Use Cases

Education
Math Problem Solving
Helps students understand and solve complex math problems
Achieves 71.0 on benchmarks such as MathVista
Research
Multimodal Reasoning Research
Provides a benchmark model for the field of multimodal reasoning
Outperforms peer models on multiple mathematical reasoning benchmarks