# 🚀 Kimina-Prover-Distill-8B
AI-MO/Kimina-Prover-Distill-8B is a theorem-proving model developed by the Project Numina and Kimi teams. It specializes in competition-style problem solving in Lean 4. The model is a distillation of Kimina-Prover-72B, which was trained with large-scale reinforcement learning, and reaches 77.86% accuracy at Pass@32 on miniF2F-test.
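For context, Pass@32 counts a problem as solved when at least one of 32 sampled proofs is verified by Lean. If you aggregate your own sampling runs, the standard unbiased pass@k estimator from Chen et al. (2021) applies; this is a generic metric sketch, not code from this model card:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: n samples drawn, c of them correct, budget k."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 64 samples per problem, 3 verified proofs, reported at k = 32
print(f"{pass_at_k(n=64, c=3, k=32):.3f}")  # ≈ 0.881
```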

## 🚀 Quick Start
### 📦 Installation
The model card lists no dedicated installation steps; the examples below only require vLLM and Transformers.
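Assuming a recent Python environment with a CUDA-capable GPU, both libraries can be installed from PyPI:

```bash
pip install vllm transformers
```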
### 💻 Usage Examples
#### Basic Usage
You can run inference with vLLM:
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_name = "AI-MO/Kimina-Prover-Distill-8B"

# Load the model with vLLM and the matching tokenizer for the chat template
model = LLM(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# Raw strings keep LaTeX escapes such as \frac intact
problem = r"The volume of a cone is given by the formula $V = \frac{1}{3}Bh$, where $B$ is the area of the base and $h$ is the height. The area of the base of a cone is 30 square units, and its height is 6.5 units. What is the number of cubic units in its volume?"

formal_statement = r"""import Mathlib
import Aesop

set_option maxHeartbeats 0

open BigOperators Real Nat Topology Rat

/-- The volume of a cone is given by the formula $V = \frac{1}{3}Bh$, where $B$ is the area of the base and $h$ is the height. The area of the base of a cone is 30 square units, and its height is 6.5 units. What is the number of cubic units in its volume? Show that it is 65.-/
theorem mathd_algebra_478 (b h v : ℝ) (h₀ : 0 < b ∧ 0 < h ∧ 0 < v) (h₁ : v = 1 / 3 * (b * h))
    (h₂ : b = 30) (h₃ : h = 13 / 2) : v = 65 := by
"""

# Build the prompt: instruction, informal problem, then the formal statement
prompt = "Think about and solve the following problem step by step in Lean 4."
prompt += f"\n# Problem: {problem}"
prompt += f"\n# Formal statement:\n```lean4\n{formal_statement}\n```\n"

messages = [
    {"role": "system", "content": "You are an expert in mathematics and Lean 4."},
    {"role": "user", "content": prompt},
]

# Render the chat template and sample one proof attempt
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8096)
output = model.generate(text, sampling_params=sampling_params)
output_text = output[0].outputs[0].text
print(output_text)
```
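The response interleaves natural-language reasoning with Lean code, and the final `lean4` fence typically contains the complete proof. A minimal sketch for pulling it out; the regex and helper name are illustrative, not part of the model card:

```python
import re

def extract_lean_proof(response: str) -> str | None:
    """Return the last ```lean4 ... ``` block from the model output, if any."""
    blocks = re.findall(r"```lean4\n(.*?)```", response, re.DOTALL)
    return blocks[-1].strip() if blocks else None

proof = extract_lean_proof(output_text)
if proof is not None:
    print(proof)  # candidate proof to hand to a Lean 4 checker
```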
#### Advanced Usage
For advanced usage examples, see Advanced Usage Examples.
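The linked examples aren't reproduced here, but one common pattern with this model is batch inference: vLLM's `generate` accepts a list of prompts, so many statements (or several samples per statement) can be attempted in one call. A minimal sketch continuing from the setup above; the `build_prompt` helper and `statements` list are illustrative, not from the model card:

```python
def build_prompt(problem: str, formal_statement: str) -> str:
    # Same prompt format as the basic example above
    p = "Think about and solve the following problem step by step in Lean 4."
    p += f"\n# Problem: {problem}"
    p += f"\n# Formal statement:\n```lean4\n{formal_statement}\n```\n"
    messages = [
        {"role": "system", "content": "You are an expert in mathematics and Lean 4."},
        {"role": "user", "content": p},
    ]
    return tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# n > 1 draws several candidates per statement, which is how Pass@k budgets are spent
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8096, n=4)

statements = [(problem, formal_statement)]  # extend with more (informal, formal) pairs
outputs = model.generate([build_prompt(p, s) for p, s in statements], sampling_params=params)

for request in outputs:
    for candidate in request.outputs:  # one entry per sampled candidate
        proof = extract_lean_proof(candidate.text)
        print(proof if proof else "no lean4 block found")
```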
## 📚 Documentation

### Model Information

| Property | Details |
| --- | --- |
| Base Model | Qwen/Qwen3-8B |
| Tags | chat |
| Library Name | transformers |
| License | apache-2.0 |
## 📄 License
This project is licensed under the Apache 2.0 License.