# MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning
MathCoder is a series of open-source large language models designed for general math problem-solving, integrating code seamlessly to enhance mathematical reasoning.
## Quick Start
You can use the MathCoder models through Hugging Face's Transformers library: create a text-generation pipeline with the `pipeline` function and the model of your choice, then pass in a math problem to obtain the solution. For more details, check our GitHub repo.
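For example, a minimal sketch (the checkpoint below is one of the released models; the prompt and generation settings are illustrative assumptions, not values prescribed by this card):

```python
# Minimal sketch: load a MathCoder checkpoint as a text-generation pipeline.
# Generation settings are illustrative assumptions.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="MathLLM/MathCoder-L-7B",  # any of the released checkpoints works
    device_map="auto",
)

problem = "What is the sum of all integers from 1 to 100?"
result = pipe(problem, max_new_tokens=512, do_sample=False)
print(result[0]["generated_text"])
```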
⨠Features
- Tailored for Math: Specifically designed for general math problem-solving.
- Multiple Model Options: Based on different base models (Llama-2 and Code Llama), offering various model sizes (7B, 13B, 34B) to meet different needs.
## Installation
MathCoder ships no package of its own; the models load directly through the Hugging Face Transformers library. A typical environment (an assumption, not an official pinned requirement) is `pip install transformers torch accelerate`.
## Usage Examples
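A hedged sketch using the lower-level Transformers API (the checkpoint is one of the released models; the problem text and generation settings are illustrative assumptions):

```python
# Hedged sketch: greedy decoding with a MathCoder checkpoint via the
# lower-level Transformers API. Settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MathLLM/MathCoder-CL-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

problem = "A train travels 120 km in 1.5 hours. What is its average speed in km/h?"
inputs = tokenizer(problem, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```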
## Documentation

### Introduction
We introduce MathCoder, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving.
| Base Model: Llama-2 | Base Model: Code Llama |
|---|---|
| [MathCoder-L-7B](https://huggingface.co/MathLLM/MathCoder-L-7B) | [MathCoder-CL-7B](https://huggingface.co/MathLLM/MathCoder-CL-7B) |
| [MathCoder-L-13B](https://huggingface.co/MathLLM/MathCoder-L-13B) | [MathCoder-CL-34B](https://huggingface.co/MathLLM/MathCoder-CL-34B) |
### Training Data
The models are trained on the MathCodeInstruct Dataset.
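A minimal sketch for inspecting the training data; the Hub ID `MathLLM/MathCodeInstruct` is an assumption based on the organization name:

```python
# Hedged sketch: load and inspect the training dataset. The Hub ID is an
# assumption based on the MathLLM organization name.
from datasets import load_dataset

dataset = load_dataset("MathLLM/MathCodeInstruct", split="train")
print(dataset)     # number of rows and column names
print(dataset[0])  # one training example
```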
### Training Procedure
The models are fine-tuned on the MathCodeInstruct dataset, using the original Llama-2 and Code Llama models as base models. For more details, check out our paper and repo.
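A deliberately simplified sketch of such a fine-tuning run; the dataset field names and all hyperparameters are assumptions for illustration, not the authors' actual configuration (see the paper and repo for that):

```python
# Hedged sketch of supervised fine-tuning a base model on an instruction
# dataset. Field names and hyperparameters are assumptions; see the
# paper/repo for the authors' actual configuration.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_id = "meta-llama/Llama-2-7b-hf"  # or a Code Llama checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_id)

dataset = load_dataset("MathLLM/MathCodeInstruct", split="train")

def tokenize(example):
    # "instruction" and "output" are hypothetical field names.
    text = example["instruction"] + "\n" + example["output"]
    return tokenizer(text, truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="mathcoder-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=3,
        learning_rate=2e-5,
        bf16=True,  # assumes bf16-capable hardware
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```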
### Evaluation

Benchmark results on math datasets such as GSM8K and MATH are reported in the paper.
## Technical Details

MathCoder solutions interleave natural-language reasoning, code, and execution results; the full method is described in the paper and the GitHub repo.
## License
The models are released under the [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) license.
## Citation
Please cite the paper if you use our data, model, or code. Please also cite the original dataset papers.
```bibtex
@inproceedings{wang2024mathcoder,
  title={MathCoder: Seamless Code Integration in {LLM}s for Enhanced Mathematical Reasoning},
  author={Ke Wang and Houxing Ren and Aojun Zhou and Zimu Lu and Sichun Luo and Weikang Shi and Renrui Zhang and Linqi Song and Mingjie Zhan and Hongsheng Li},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=z8TW0ttBPp}
}

@inproceedings{zhou2024solving,
  title={Solving Challenging Math Word Problems Using {GPT}-4 Code Interpreter with Code-based Self-Verification},
  author={Aojun Zhou and Ke Wang and Zimu Lu and Weikang Shi and Sichun Luo and Zipeng Qin and Shaoqing Lu and Anya Jia and Linqi Song and Mingjie Zhan and Hongsheng Li},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=c8McWs4Av0}
}
```