🚀 QuantFactory/AceReason-Nemotron-7B-GGUF
This is a quantized (GGUF) version of nvidia/AceReason-Nemotron-7B created using llama.cpp, offering efficient local inference for math and code reasoning.

🚀 Quick Start
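For a quick local test of the GGUF weights, here is a minimal sketch using the llama-cpp-python bindings. The quantization filename below is an assumption (substitute whichever .gguf file you download from this repo), and the chat template is read from the GGUF metadata when present.
```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="AceReason-Nemotron-7B.Q4_K_M.gguf",  # hypothetical filename; use your downloaded quant
    n_ctx=32768,      # large context window to leave room for long chain-of-thought outputs
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

# Per the usage tips below: no system prompt; the instruction goes in the user turn.
messages = [{
    "role": "user",
    "content": "What is 17 * 24? Please reason step by step, "
               "and put your final answer within \\boxed{}.",
}]
out = llm.create_chat_completion(messages=messages, temperature=0.6, top_p=0.95, max_tokens=4096)
print(out["choices"][0]["message"]["content"])
```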
✨ Features
- Reinforcement Learning Training: AceReason-Nemotron-7B is trained entirely through reinforcement learning (RL), starting from DeepSeek-R1-Distilled-Qwen-7B, and achieves excellent results on math and code reasoning tasks.
- Impressive Performance: It achieves 69.0% on AIME 2024 (+14.5%), 53.6% on AIME 2025 (+17.4%), 51.8% on LiveCodeBench v5 (+8%), and 44.1% on LiveCodeBench v6 (+7%).
- Systematic RL Approach: The model adopts a two-step RL training approach: first on math-only prompts, then on code-only prompts, which significantly enhances both math and code reasoning capabilities.
📚 Documentation
Original Model Card
AceReason-Nemotron: Advancing Math and Code Reasoning through Reinforcement Learning
[Paper](https://arxiv.org/abs/2505.16400) | [Dataset](https://huggingface.co/datasets/nvidia/AceReason-Math) | [Model Collection](https://huggingface.co/collections/nvidia/acereason-682f4e1261dc22f697fd1485) | [Evaluation Guide](https://huggingface.co/nvidia/AceReason-Nemotron-14B/blob/main/README_EVALUATION.md)
📢 News
- 6/11/2025: We share our evaluation toolkit at AceReason Evaluation, including:
  - scripts to run inference and scoring
  - LiveCodeBench (avg@8): model prediction files and scores for each month (2023/5-2025/5)
  - AIME24/25 (avg@64): model prediction files and scores
- 6/2/2025: We are excited to share our math RL training dataset at AceReason-Math
Results
We evaluate our model against competitive reasoning models of comparable size within the Qwen2.5 and Llama3.1 model families on AIME 2024, AIME 2025, LiveCodeBench v5 (2024/08/01-2025/02/01), and LiveCodeBench v6 (2025/02/01-2025/05/01). More evaluation results can be found in our technical report.
| Model | AIME 2024 (avg@64) | AIME 2025 (avg@64) | LCB v5 (avg@8) | LCB v6 (avg@8) |
| --- | --- | --- | --- | --- |
| QwQ-32B | 79.5 | 65.8 | 63.4 | - |
| DeepSeek-R1-671B | 79.8 | 70.0 | 65.9 | - |
| Llama-Nemotron-Ultra-253B | 80.8 | 72.5 | 66.3 | - |
| o3-mini (medium) | 79.6 | 76.7 | 67.4 | - |
| Light-R1-7B | 59.1 | 44.3 | 40.6 | 36.4 |
| Light-R1-14B | 74 | 60.2 | 57.9 | 51.5 |
| DeepCoder-14B (32K Inference) | 71 | 56.1 | 57.9 | 50.4 |
| OpenMath-Nemotron-7B | 74.8 | 61.2 | - | - |
| OpenCodeReasoning-Nemotron-7B | - | - | 51.3 | 46.1 |
| Llama-Nemotron-Nano-8B-v1 | 61.3 | 47.1 | 46.6 | 46.2 |
| DeepSeek-R1-Distilled-Qwen-7B | 55.5 | 39.0 | 37.6 | 34.1 |
| DeepSeek-R1-Distilled-Qwen-14B | 69.7 | 50.2 | 53.1 | 47.9 |
| DeepSeek-R1-Distilled-Qwen-32B | 72.6 | 54.9 | 57.2 | - |
| AceReason-Nemotron-7B 🤖 | 69.0 | 53.6 | 51.8 | 44.1 |
| AceReason-Nemotron-14B 🤖 | 78.6 | 67.4 | 61.1 | 54.9 |
💻 Usage Examples
Basic Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'nvidia/AceReason-Nemotron-7B'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

prompt = "Jen enters a lottery by picking $4$ distinct numbers from $S=\\{1,2,3,\\cdots,9,10\\}.$ $4$ numbers are randomly chosen from $S.$ She wins a prize if at least two of her numbers were $2$ of the randomly chosen numbers, and wins the grand prize if all four of her numbers were the randomly chosen numbers. The probability of her winning the grand prize given that she won a prize is $\\tfrac{m}{n}$ where $m$ and $n$ are relatively prime positive integers. Find $m+n$."
messages = [{"role": "user", "content": prompt}]

# Render the chat template; note there is no system prompt (see the usage tips below).
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to("cuda")

# Recommended sampling settings; do_sample=True is needed for temperature/top_p to take effect.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,
    top_p=0.95
)
# Strip the prompt tokens, keeping only the newly generated continuation.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
Advanced Usage
Advanced usage mainly involves different prompt settings for different types of questions; the snippet below assembles the prompt for code-generation questions, with or without starter code.
````python
# Assemble the prompt for a code-generation question.
question = ""       # the coding problem statement
starter_code = ""   # optional function header / starter code

code_instruction_nostartercode = """Write Python code to solve the problem. Please place the solution code in the following format:\n```python\n# Your solution code here\n```"""
code_instruction_hasstartercode = """Please place the solution code in the following format:\n```python\n# Your solution code here\n```"""

if starter_code != "":
    # When starter code is provided, ask the model to continue from the given header.
    question += "\n\n" + "Solve the problem starting with the provided function header.\n\nFunction header:\n" + "```\n" + starter_code + "\n```"
    question += "\n\n" + code_instruction_hasstartercode
else:
    question += "\n\n" + code_instruction_nostartercode

final_prompt = "<|User|>" + question + "<|Assistant|><think>\n"
````
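Because the special tokens (`<|User|>`, `<|Assistant|>`, `<think>`) are written out explicitly here, `final_prompt` is meant to be passed to the inference engine as raw text rather than re-rendered through `apply_chat_template`; the vLLM sketch under the usage tips below assumes the same convention.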
💡 Usage Tip
- Don't include a system prompt; instead, place all instructions directly in the user prompt.
- For math questions, use the instruction: Please reason step by step, and put your final answer within \boxed{}.
- For code questions, follow the code-instruction construction shown in the advanced usage example.
- Our inference engine for evaluation is vLLM==0.7.3 with top_p=0.95, temperature=0.6, max_tokens=32768; a sketch of this setup follows below.
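Here is a minimal sketch of that evaluation-style setup, assuming vLLM==0.7.3 serving the original nvidia/AceReason-Nemotron-7B checkpoint (the Hugging Face weights, not the GGUF files in this repo); the math question is an illustrative placeholder:
```python
# Hedged sketch of the evaluation-style setup described above (vLLM==0.7.3 assumed).
from vllm import LLM, SamplingParams

llm = LLM(model="nvidia/AceReason-Nemotron-7B", max_model_len=32768)
sampling = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=32768)

# Build a math prompt following the tips above: no system prompt, explicit special
# tokens as in the advanced usage example, and the recommended math instruction.
question = "Find the sum of all positive multiples of 7 below 100."  # illustrative only
final_prompt = (
    "<|User|>" + question
    + " Please reason step by step, and put your final answer within \\boxed{}."
    + "<|Assistant|><think>\n"
)

outputs = llm.generate([final_prompt], sampling)
print(outputs[0].outputs[0].text)
```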
Evaluation Toolkit
Please check the evaluation code, scripts, and cached prediction files in AceReason Evaluation.
Correspondence to
Yang Chen (yachen@nvidia.com), Zhuolin Yang (zhuoliny@nvidia.com), Zihan Liu (zihanl@nvidia.com), Chankyu Lee (chankyul@nvidia.com), Wei Ping (wping@nvidia.com)
📄 License
Your use of this model is governed by the NVIDIA Open Model License.
Citation
```bibtex
@article{chen2025acereason,
  title={AceReason-Nemotron: Advancing Math and Code Reasoning through Reinforcement Learning},
  author={Chen, Yang and Yang, Zhuolin and Liu, Zihan and Lee, Chankyu and Xu, Peng and Shoeybi, Mohammad and Catanzaro, Bryan and Ping, Wei},
  journal={arXiv preprint arXiv:2505.16400},
  year={2025}
}
```