AM-Thinking-v1

Developed by a-m-team
A 32-billion-parameter dense language model focused on enhanced reasoning, built on Qwen 2.5-32B-Base, with reasoning-benchmark performance comparable to much larger MoE models.
Downloads 1,377
Release Time: 5/10/2025

Model Overview

AM-Thinking-v1 is a 32-billion-parameter dense language model dedicated to enhancing reasoning ability. Built on Qwen 2.5-32B-Base, it achieves flagship-level reasoning performance through a meticulously designed training pipeline.

Model Features

High-performance reasoning capability
Demonstrates performance comparable to larger MoE models like DeepSeek-R1 and Qwen3-235B-A22B in reasoning benchmarks.
Single-card deployment
Deployable on a single A100-80GB GPU with deterministic latency and no MoE routing overhead; a minimal serving sketch follows this list.
Built with open-source components
Fully constructed using open-source components, including Qwen 2.5-32B-Base and reinforcement learning training data.
Meticulously designed training process
Achieves flagship-level reasoning through supervised fine-tuning followed by dual-stage reinforcement learning.
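
Because the model is a dense 32B checkpoint rather than an MoE, single-GPU serving is straightforward. The sketch below is a minimal, non-authoritative example using vLLM; the Hugging Face repo ID "a-m-team/AM-Thinking-v1", the bf16 setting, and the sampling parameters are assumptions to verify against the official model card.

```python
# Minimal single-GPU serving sketch with vLLM.
# Assumptions: the checkpoint is published as "a-m-team/AM-Thinking-v1" on
# Hugging Face, and an A100-80GB (or comparable) GPU is available.
from vllm import LLM, SamplingParams

llm = LLM(
    model="a-m-team/AM-Thinking-v1",  # assumed repo ID; adjust if it differs
    dtype="bfloat16",                 # bf16 keeps a dense 32B model within 80 GB
    tensor_parallel_size=1,           # single card, no MoE routing involved
)

params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=2048)
outputs = llm.generate(
    ["Prove that the sum of any two even integers is even."], params
)
print(outputs[0].outputs[0].text)
```

For chat-style use, the prompt would normally be wrapped with the model's chat template rather than passed as raw text; the transformers sketch in the Use Cases section below shows that path.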

Model Capabilities

Text generation
Complex reasoning
Code generation

Use Cases

Reasoning tasks
Mathematical reasoning: solving complex mathematical problems, with outstanding performance on the AIME'24/'25 benchmarks (a usage sketch follows this section).
Code generation
Generating high-quality code that surpasses DeepSeek-R1 on LiveCodeBench.
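
As a rough illustration of the mathematical-reasoning use case, the following sketch prompts the model through Hugging Face transformers using its chat template. The repo ID "a-m-team/AM-Thinking-v1", the sampling settings, and the token budget are assumptions, not values taken from the official documentation.

```python
# Hedged usage sketch: mathematical reasoning with Hugging Face transformers.
# The repo ID "a-m-team/AM-Thinking-v1" is assumed; verify against the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "a-m-team/AM-Thinking-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # a dense 32B model in bf16 fits on one A100-80GB
    device_map="auto",
)

messages = [{"role": "user", "content": "How many positive divisors does 360 have?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models typically emit a long chain of thought before the final
# answer, so allow a generous generation budget.
output_ids = model.generate(
    input_ids,
    max_new_tokens=4096,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The expected answer is 24 (360 = 2^3 · 3^2 · 5, so (3+1)(2+1)(1+1) = 24), which gives a quick sanity check on the model's final answer.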