MathCoder-VL: Bridging Vision and Code for Enhanced Multimodal Mathematical Reasoning
MathCoder-VL is a series of open-source large multimodal models designed for general math problem-solving, bridging vision and code to enhance multimodal mathematical reasoning.
Repo: https://github.com/mathllm/MathCoder
Paper: https://huggingface.co/papers/2505.10557
Quick Start
This section provides a brief overview of the project and how to get started.
Model Information
| Property | Details |
| --- | --- |
| Model Type | Image-text-to-text |
| Metrics | Accuracy |
| Library Name | transformers |
| Base Model | OpenGVLab/InternVL2-8B |
| Datasets | MathLLMs/MM-MathInstruct |
| License | apache-2.0 |
| Tags | mathematics, reasoning, multi-modal-qa, math-qa, figure-qa, geometry-qa, math-word-problem, textbook-qa, vqa, geometry-diagram, synthetic-scene, chart, plot, scientific-figure, table, function-plot, abstract-scene, puzzle-test, document-image, science |
Features
We introduce MathCoder-VL, a series of open-source large multimodal models (LMMs) specifically tailored for general math problem-solving. We also introduce FigCodifier-8B, an image-to-code model. The released models and their base models are listed below, followed by an illustrative inference sketch.
| Base Model | Ours |
| --- | --- |
| [Mini-InternVL-Chat-2B-V1-5](https://huggingface.co/OpenGVLab/Mini-InternVL-Chat-2B-V1-5) | [MathCoder-VL-2B](https://huggingface.co/MathLLMs/MathCoder-VL-2B) |
| [InternVL2-8B](https://huggingface.co/OpenGVLab/InternVL2-8B) | [MathCoder-VL-8B](https://huggingface.co/MathLLMs/MathCoder-VL-8B) |
| [InternVL2-8B](https://huggingface.co/OpenGVLab/InternVL2-8B) | FigCodifier-8B |
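The MathCoder-VL checkpoints are fine-tuned from the InternVL2-family base models listed above, and the card specifies `transformers` as the library. A plausible way to run them is therefore through the base model's remote-code `chat` interface. The sketch below assumes that interface carries over; the image path, prompt, and single-tile preprocessing are illustrative rather than the official recipe.

```python
import torch
import torchvision.transforms as T
from PIL import Image
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "MathLLMs/MathCoder-VL-8B"

# Assumption: the checkpoint keeps the InternVL2 remote-code interface of its base model.
model = AutoModel.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, trust_remote_code=True
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True, use_fast=False)

# Single 448x448 tile with ImageNet normalization; the InternVL2 reference code
# additionally performs dynamic tiling for high-resolution figures.
transform = T.Compose([
    T.Lambda(lambda im: im.convert("RGB")),
    T.Resize((448, 448), interpolation=T.InterpolationMode.BICUBIC),
    T.ToTensor(),
    T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
pixel_values = transform(Image.open("problem.png")).unsqueeze(0).to(torch.bfloat16).cuda()

question = "<image>\nSolve the problem shown in the figure. Reason step by step."
response = model.chat(
    tokenizer, pixel_values, question,
    generation_config=dict(max_new_tokens=1024, do_sample=False),
)
print(response)
```

Under the same assumption, MathCoder-VL-2B would be loaded the same way, and FigCodifier-8B would be prompted to emit code that reproduces the input figure rather than a textual solution.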
Usage Examples
Basic Usage
```python
from io import BytesIO

from datasets import load_dataset
from PIL import Image

# Download MM-MathInstruct from the Hugging Face Hub.
mm_mathinstruct = load_dataset("MathLLMs/MM-MathInstruct")
print(mm_mathinstruct)

# The 'image' field stores raw bytes; decode and display the last training sample.
img = Image.open(BytesIO(mm_mathinstruct['train'][-1]['image']))
img.show()
```
It should print:
```
DatasetDict({
    train: Dataset({
        features: ['id', 'image', 'question', 'solution', 'image_path'],
        num_rows: 2871988
    })
})
```
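Each record exposes the fields shown above: `id`, `image`, `question`, `solution`, and `image_path`, with `image` holding raw bytes as in the snippet. The sketch below exports one sample to disk for inspection; treating `solution` as plain text and `image_path` as a relative file name with an image extension are assumptions, not documented guarantees.

```python
import os
from io import BytesIO

from datasets import load_dataset
from PIL import Image

ds = load_dataset("MathLLMs/MM-MathInstruct", split="train")
sample = ds[0]

# Decode the raw bytes in the 'image' field and save the picture under the
# file name recorded in 'image_path' (assumed to be a relative name with an extension).
os.makedirs("mm_mathinstruct_images", exist_ok=True)
img = Image.open(BytesIO(sample["image"]))
img.save(os.path.join("mm_mathinstruct_images", os.path.basename(sample["image_path"])))

print(sample["question"])
print(sample["solution"][:500])  # preview only; solutions can be long (assumed to be strings)
```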
Documentation
Motivation
Construction of FigCodifier
Construction of MathCoder-VL
Performance
License
This project is licensed under the Apache-2.0 license.
Citation
Please cite the paper if you use our data, models, or code.
```bibtex
@inproceedings{wang2025mathcodervl,
  title={MathCoder-{VL}: Bridging Vision and Code for Enhanced Multimodal Mathematical Reasoning},
  author={Ke Wang and Junting Pan and Linda Wei and Aojun Zhou and Weikang Shi and Zimu Lu and Han Xiao and Yunqiao Yang and Houxing Ren and Mingjie Zhan and Hongsheng Li},
  booktitle={The 63rd Annual Meeting of the Association for Computational Linguistics},
  year={2025},
  url={https://openreview.net/forum?id=nuvtX1imAb}
}

@inproceedings{lu2025mathcoder2,
  title={MathCoder2: Better Math Reasoning from Continued Pretraining on Model-translated Mathematical Code},
  author={Zimu Lu and Aojun Zhou and Ke Wang and Houxing Ren and Weikang Shi and Junting Pan and Mingjie Zhan and Hongsheng Li},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=1Iuw1jcIrf}
}

@inproceedings{wang2024mathcoder,
  title={MathCoder: Seamless Code Integration in {LLM}s for Enhanced Mathematical Reasoning},
  author={Ke Wang and Houxing Ren and Aojun Zhou and Zimu Lu and Sichun Luo and Weikang Shi and Renrui Zhang and Linqi Song and Mingjie Zhan and Hongsheng Li},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=z8TW0ttBPp}
}
```