Hunyuan T1 20250403

A deep-reasoning model developed by Tencent, built on the Hybrid-Transformer-Mamba MoE architecture, with enhanced capabilities for mathematical logic and other complex tasks.
Intelligence: Medium
Speed: Slow
Input Supported Modalities: Text
Is Reasoning Model: Yes
Context Window: 64,000 tokens
Maximum Output Tokens: 64,000
Knowledge Cutoff: 2024-12-31

Pricing

Input: ¥1 /M tokens
Output: ¥4 /M tokens
Blended Price: ¥3 /M tokens
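
As a quick illustration of these rates, here is a minimal sketch of a per-request cost estimate. The token counts in the example are hypothetical, and the helper function is ours, not part of any official SDK.

```python
# Hypothetical per-request cost estimate for Hunyuan-T1-20250403,
# using the listed rates of CNY 1 / CNY 4 per million input / output tokens.
INPUT_CNY_PER_M = 1.0    # input price from the pricing table above
OUTPUT_CNY_PER_M = 4.0   # output price from the pricing table above

def request_cost_cny(input_tokens: int, output_tokens: int) -> float:
    """Return the cost in CNY for one request (token counts are hypothetical)."""
    return (input_tokens * INPUT_CNY_PER_M
            + output_tokens * OUTPUT_CNY_PER_M) / 1_000_000

# Example: a 2,000-token prompt with an 8,000-token reasoning-heavy reply.
print(f"{request_cost_cny(2_000, 8_000):.4f} CNY")  # -> 0.0340 CNY
```

Note that because output tokens cost four times as much as input tokens, long chain-of-thought completions dominate the bill for reasoning workloads.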

Quick Simple Comparison (Input / Output)

Hunyuan-T1-20250403: ¥0.14
Hunyuan-Vision: ¥2.5
HunYuan-TurboS: ¥0.11

Basic Parameters

Hunyuan-T1-20250403 Technical Parameters

Parameter Count: Not Announced
Context Length: 64.00k tokens
Training Data Cutoff: 2024-12-31
Open Source Category: Proprietary
Multimodal Support: Text Only
Throughput: 1,580
Release Date: 2025-04-03
Response Speed: 21.4 tokens/s
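
For reference, below is a minimal sketch of calling the model, assuming an OpenAI-compatible endpoint. The base URL, model identifier, and environment variable name are assumptions, not confirmed by this page; consult the official Tencent Cloud Hunyuan documentation for the actual values.

```python
# Minimal sketch, assuming an OpenAI-compatible endpoint for Hunyuan T1.
# The base_url, model name, and env var below are assumptions; check the
# official Tencent Cloud documentation for the real values before use.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["HUNYUAN_API_KEY"],                 # hypothetical env var
    base_url="https://api.hunyuan.cloud.tencent.com/v1",   # assumed endpoint
)

response = client.chat.completions.create(
    model="hunyuan-t1-latest",   # assumed model identifier
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    max_tokens=4096,             # well within the 64,000-token output limit
)
print(response.choices[0].message.content)
```

At the listed response speed of roughly 21.4 tokens/s, a 4,096-token completion would take on the order of three minutes, so long reasoning outputs are best consumed with streaming enabled.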

Benchmark Scores

Below is the performance of Hunyuan-T1-20250403 on various standard benchmarks. These tests evaluate the model's capabilities across different tasks and domains.
Intelligence Index: 52.1
Overall measure of the large language model's intelligence level.

Coding Index: 71.6
Indicator of the model's performance on coding tasks.

Math Index: 78.9
Indicator of capability in solving mathematical problems, mathematical reasoning, and other math-related tasks.

MMLU Pro: 82.9
Massive Multitask Language Understanding (Professional) - an enhanced, harder version of MMLU testing knowledge and reasoning across many subjects.

GPQA: -
Graduate-Level Google-Proof Q&A - tests advanced scientific knowledge (biology, chemistry, physics); scores are typically reported on the Diamond subset.

HLE: -
Humanity's Last Exam - a very difficult benchmark of expert-written questions spanning many disciplines.

LiveCodeBench: 64.9
Evaluation focused on large language models' ability in real-world code writing and programming-competition problem solving.

SciCode: -
The model's capability in code generation for scientific computing and specific scientific domains.

HumanEval: 76.8
Score achieved by the model on the HumanEval code-generation benchmark.

MATH-500 Score: 96.2
Score on MATH-500, a 500-problem subset of the well-known MATH benchmark.

AIME Score: 78.2
Measures the model's ability to solve high-difficulty mathematical competition problems (AIME level).