Doubao 1.5 Pro 32k

Doubao 1.5 Pro is a third-generation large language model released by ByteDance. It adopts a large-scale sparse Mixture-of-Experts (MoE) architecture, delivering performance on par with a dense model of roughly 7x its activated parameter count. It offers strong multilingual processing, supports multimodal input, and is applicable to a wide range of task scenarios.
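To illustrate the sparse-MoE idea described above, here is a minimal sketch of top-k expert routing. This is an illustrative toy, not Doubao's actual implementation: a gating network selects a few experts per token, so only a fraction of the total parameters is activated on each forward pass, which is how a sparse model can match a much larger dense model at far lower compute.

```python
# Toy sparse MoE layer (illustrative sketch only, not Doubao's real code).
# A router scores all experts, keeps the top-k, and mixes their outputs.
import numpy as np

rng = np.random.default_rng(0)

D, N_EXPERTS, TOP_K = 8, 4, 2  # hidden size, total experts, experts activated per token

W_gate = rng.normal(size=(D, N_EXPERTS))                        # router weights
experts = [rng.normal(size=(D, D)) for _ in range(N_EXPERTS)]   # experts (linear maps here)

def moe_layer(x):
    """Route one token vector x through its top-k experts only."""
    logits = x @ W_gate
    top = np.argsort(logits)[-TOP_K:]      # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the selected experts only
    # Only TOP_K of the N_EXPERTS weight matrices are touched for this token,
    # so activated parameters << total parameters.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_layer(rng.normal(size=D))
print(y.shape)  # (8,)
```

With TOP_K=2 of 4 experts, each token activates half the expert parameters; production MoE models use much larger expert counts with a similarly small k.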
Intelligence: Weak
Speed: Slow
Input Supported Modalities: Text, Image
Is Reasoning Model: Yes
Context Window: 32,000 tokens
Maximum Output Tokens: 32,000
Knowledge Cutoff: -

Pricing

Input: ¥0.8 /M tokens
Output: ¥0.2 /M tokens
Blended Price: ¥0.5 /M tokens
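The listed per-million-token prices make per-request cost a simple calculation. The helper below is a hypothetical sketch using the prices shown above (¥0.8/M input, ¥0.2/M output); it is not an official billing tool.

```python
# Hypothetical cost estimator based on the listed prices above.
INPUT_PRICE = 0.8   # CNY per 1M input tokens
OUTPUT_PRICE = 0.2  # CNY per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in CNY for a single request."""
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000

# e.g. a full 32k-token prompt with a 1k-token reply:
print(f"¥{request_cost(32_000, 1_000):.4f}")  # ¥0.0258
```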

Quick Simple Comparison (¥ /M tokens)

Doubao-1.5-pro-256k: ¥0.69
Doubao-1.5-thinking-pro: ¥0.56
Doubao-1.5-vision-lite: ¥0.21

Basic Parameters

Doubao-1.5-pro-32k Technical Parameters

Parameter Count: Not Announced
Context Length: 32,000 tokens
Training Data Cutoff: -
Open Source Category: Proprietary
Multimodal Support: Text, Image
Throughput: 0
Release Date: 2025-01-22
Response Speed: 44.82 tokens/s

Benchmark Scores

Below is the performance of Doubao-1.5-pro-32k in various standard benchmark tests. These tests evaluate the model's capabilities in different tasks and domains.
Intelligence Index: -
Overall large language model intelligence level.
Coding Index: 70.2
Indicator of the model's performance on coding tasks.
Math Index: 64.7
Indicator of capability in solving mathematical problems and mathematical reasoning.
MMLU Pro: 80.1
Massive Multitask Language Understanding (Pro) - a harder, text-only multiple-choice benchmark spanning many academic and professional domains.
GPQA: -
Graduate-Level Google-Proof Q&A - expert-written science questions in biology, physics, and chemistry; the "Diamond" subset is the most commonly reported.
HLE: -
Humanity's Last Exam - a broad, very difficult benchmark of expert-level questions across many subjects.
LiveCodeBench: -
Evaluation of real-world code writing and programming-competition problem solving.
SciCode: -
Capability in code generation for scientific computing and specific scientific domains.
HumanEval: -
Score on the HumanEval code-generation benchmark.
Math 500 Score: -
Score on MATH 500, a 500-problem subset of the MATH competition-mathematics benchmark.
AIME Score: -
Ability to solve high-difficulty competition mathematics problems (AIME level).
AIbase
Empowering the Future, Your AI Solution Knowledge Base
© 2025 AIbase