
QwQ 32B
A model focused on advancing AI reasoning capability, with particularly strong performance in mathematics and programming. It is capable of deep self-reflection and self-questioning, but has known limitations around language mixing and recursive/endless reasoning loops.
Intelligence: Medium
Speed: Relatively Slow
Input Supported Modalities: Text only
Is Reasoning Model: Yes
Context Window: 131,072 tokens
Maximum Output Tokens: 32,768
Knowledge Cutoff: 2024-11-28
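The context window and output cap above bound how much room a prompt leaves for generation. Below is a minimal sketch of budgeting against those two numbers; the helper name and the way the budget is split are illustrative assumptions, not part of any QwQ API.

```python
# Token-budget check for QwQ 32B using the limits listed above:
# a 131,072-token context window and a 32,768-token maximum output.
# The helper below is a hypothetical illustration, not a library call.

CONTEXT_WINDOW = 131_072    # total tokens (prompt + generated output)
MAX_OUTPUT_TOKENS = 32_768  # cap on generated tokens

def plan_generation(prompt_tokens: int, requested_output: int) -> int:
    """Clamp a requested output length so prompt + output fits the context window."""
    if prompt_tokens >= CONTEXT_WINDOW:
        raise ValueError("prompt alone exceeds the context window")
    room_left = CONTEXT_WINDOW - prompt_tokens
    return min(requested_output, room_left, MAX_OUTPUT_TOKENS)

print(plan_generation(prompt_tokens=120_000, requested_output=32_768))  # -> 11072
```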
Pricing
Input: - /M tokens
Output: - /M tokens
Blended Price: ¥3.51 /M tokens
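Separate input and output prices are not listed here, so the blended figure is the only handle for cost estimates. The sketch below treats every token at the blended rate; that flat-rate assumption is an approximation made for illustration.

```python
# Rough cost estimate from the blended price listed above (¥3.51 per million tokens).
# Charging input and output tokens at the same blended rate is an approximation,
# since this page does not list separate per-direction prices for QwQ 32B.

BLENDED_PRICE_PER_M = 3.51  # ¥ per 1,000,000 tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Approximate request cost in ¥ at the blended per-token rate."""
    total_tokens = input_tokens + output_tokens
    return total_tokens / 1_000_000 * BLENDED_PRICE_PER_M

# e.g. a 10,000-token prompt with a 2,000-token answer costs roughly ¥0.042
print(round(estimate_cost(10_000, 2_000), 4))
```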
Quick Simple Comparison
Qwen Turbo
Qwen2.5 Turbo
Qwen-Plus-Latest
¥0.11 /M tokens
Basic Parameters
QwQ 32B Technical Parameters
Parameter Count: 32,500.0M
Context Length: 131.07k tokens
Training Data Cutoff: 2024-11-28
Open Source Category: Open Weights (Permissive License)
Multimodal Support: Text Only
Throughput: 0
Release Date: 2025-03-05
Response Speed: 79.728935 tokens/s
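Because the model is listed as open weights under a permissive license, it can be run locally. Below is a minimal sketch using Hugging Face transformers; the repository id Qwen/QwQ-32B is an assumption to be checked against the actual release, and a 32.5B-parameter model needs substantial GPU memory (or quantization) to load.

```python
# Minimal local-inference sketch for an open-weights ~32.5B model.
# The repo id "Qwen/QwQ-32B" is assumed here; verify it before use.
# Requires the `transformers` and `torch` packages and enough GPU memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory
    device_map="auto",           # spread layers across available GPUs
)

messages = [{"role": "user", "content": "How many positive divisors does 360 have?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models tend to emit long chains of thought; the listed maximum output
# is 32,768 tokens, but 4,096 keeps this example cheap.
output = model.generate(inputs, max_new_tokens=4096)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```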
Benchmark Scores
Below is the performance of QwQ 32B in various standard benchmark tests. These tests evaluate the model's capabilities in different tasks and domains.
Intelligence Index: 58.06
Overall indicator of large language model intelligence level
Coding Index: 49.42
Indicator of AI model performance on coding tasks
Math Index: 86.87
Indicator of capability in solving mathematical problems, mathematical reasoning, and other math-related tasks
MMLU Pro: 76.4
Massive Multitask Language Understanding (Professional) - harder, reasoning-focused multiple-choice questions across many subject areas
GPQA: 59.3
Graduate-Level Google-Proof Q&A - expert-written science questions (biology, physics, chemistry), commonly reported on the Diamond subset
HLE: 8.2
Humanity's Last Exam - a very difficult benchmark of expert-level questions across many disciplines
LiveCodeBench: 63.1
Evaluation focused on real-world code writing and solving programming-competition problems
SciCode: 35.8
Capability in code generation for scientific computing and domain-specific scientific tasks
HumanEval: 97.6
Score on the HumanEval code-generation benchmark
Math 500 Score: 95.7
Score on MATH-500, a 500-problem subset of the MATH benchmark
AIME Score: 78
Ability to solve high-difficulty mathematical competition problems (AIME level)
GPT 5 Mini (openai): Input ¥1.8 /M tokens, Output ¥14.4 /M tokens, Context Length 400k
GPT 5 Standard (openai): Input ¥63 /M tokens, Output ¥504 /M tokens, Context Length 400k
GPT 5 Nano (openai): Input ¥0.36 /M tokens, Output ¥2.88 /M tokens, Context Length 400k
GPT 5 (openai): Input ¥9 /M tokens, Output ¥72 /M tokens, Context Length 400k
GLM 4.5 (chatglm): Input ¥0.43 /M tokens, Output ¥1.01 /M tokens, Context Length 131k
Gemini 1.0 Pro (google): Input ¥3.6 /M tokens, Output ¥10.8 /M tokens, Context Length 33k
Gemini 2.0 Flash Lite (Preview) (google): Input ¥0.58 /M tokens, Output ¥2.16 /M tokens, Context Length 1M
GPT 4 (openai): Input ¥216 /M tokens, Output ¥432 /M tokens, Context Length 8k
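For comparison against QwQ 32B's blended ¥3.51 /M tokens, the sketch below prices one request against each model listed above, using the per-million-token figures as given; the 10,000-input / 2,000-output example request is an arbitrary assumption, and prices change over time.

```python
# Cost of one request for each model listed above, charging input and output
# tokens at their own listed rates (¥ per million tokens). Comparison sketch only.

PRICES = {  # model: (input ¥/M tokens, output ¥/M tokens)
    "GPT 5 Mini": (1.8, 14.4),
    "GPT 5 Standard": (63, 504),
    "GPT 5 Nano": (0.36, 2.88),
    "GPT 5": (9, 72),
    "GLM 4.5": (0.43, 1.01),
    "Gemini 1.0 Pro": (3.6, 10.8),
    "Gemini 2.0 Flash Lite (Preview)": (0.58, 2.16),
    "GPT 4": (216, 432),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in ¥ for one request, pricing input and output separately."""
    in_price, out_price = PRICES[model]
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

for name in PRICES:
    print(f"{name}: ¥{request_cost(name, 10_000, 2_000):.4f}")
```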