# Mathematical problem solving
## The Teacher
A language model fine-tuned from Qwen3-1.7B that improves mathematical reasoning through reinforcement learning.
- Author: shiviktech
- Tags: Large Language Model, Safetensors, English
- Downloads: 824 · Likes: 0
## Openr1 Qwen 7B Turkish
A 7B-parameter large language model based on Qwen2.5-Instruct and fine-tuned on Turkish datasets, specializing in mathematical reasoning and step-by-step thinking.
- Author: WiroAI
- License: Apache-2.0
- Tags: Large Language Model, Transformers
- Downloads: 319 · Likes: 21
## Deepseek R1 Distill Qwen 7B Japanese
The Japanese version of the DeepSeek R1 model, fine-tuned for Japanese reasoning tasks; it responds reliably and accurately to prompts in Japanese.
- Author: lightblue
- License: Apache-2.0
- Tags: Large Language Model, Transformers, Japanese
- Downloads: 1,067 · Likes: 30
## Phi 3 Small 128k Instruct
Phi-3-Small-128K-Instruct is a 7-billion-parameter lightweight open model focused on high quality and strong reasoning, supporting a 128K-token context window and excelling at commonsense reasoning, language understanding, mathematics, and coding.
- Author: microsoft
- License: MIT
- Tags: Large Language Model, Transformers, Other
- Downloads: 7,194 · Likes: 176
## Code Llama 3 8B
A code generation and mathematical problem-solving model trained on Llama-3-8B, supporting multiple programming languages and producing detailed code explanations.
- Author: ajibawa-2023
- Tags: Large Language Model, Transformers, Multilingual
- Downloads: 55 · Likes: 30
## Noon 7b
Noon is a 7-billion-parameter Arabic large language model based on the BLOOM architecture and instruction fine-tuned. It supports text generation, code generation, mathematical problem solving, and Q&A.
- Author: Naseej
- License: OpenRAIL
- Tags: Large Language Model, Transformers, Multilingual
- Downloads: 200 · Likes: 45