
Doubao 1.5 Vision Lite
A lightweight vision-understanding model in the Doubao 1.5 model family. It offers lower token costs and improved latency, along with visual reasoning and fine-grained information-understanding capabilities. It supports processing images and generating text-based analysis and descriptions.
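As a minimal sketch of how the model is typically called: Doubao models are served through Volcengine Ark, which exposes an OpenAI-compatible chat API. The base URL below is the commonly documented Ark endpoint and the model ID is a placeholder; both are assumptions, so check your Ark console for the exact values.

```python
from openai import OpenAI

# Assumed Ark endpoint and placeholder model ID -- replace with the
# values shown in your own Volcengine Ark console.
client = OpenAI(
    base_url="https://ark.cn-beijing.volces.com/api/v3",
    api_key="YOUR_ARK_API_KEY",
)

response = client.chat.completions.create(
    model="doubao-1-5-vision-lite",  # placeholder model/endpoint ID
    messages=[
        {
            "role": "user",
            # Vision requests mix image and text parts in one message.
            "content": [
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/receipt.png"}},
                {"type": "text",
                 "text": "Describe this image and extract any visible text."},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```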
Intelligence: Weak
Speed: Slow
Input Supported Modalities: Text, Image
Is Reasoning Model: Yes
Context Window: 128,000 tokens
Maximum Output Tokens: 0
Knowledge Cutoff: -
Pricing
Input: ¥1.5 /M tokens
Output: ¥4.5 /M tokens
Blended Price: ¥3 /M tokens
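As a sanity check, the blended figure matches a 1:1 input:output token mix, (1.5 + 4.5) / 2 = 3; the page does not state the actual weighting, so treat that ratio as an assumption. A minimal per-request cost sketch at these list prices:

```python
INPUT_CNY_PER_M = 1.5   # CNY per million input tokens (list price above)
OUTPUT_CNY_PER_M = 4.5  # CNY per million output tokens (list price above)

def request_cost_cny(input_tokens: int, output_tokens: int) -> float:
    """Cost in CNY of one request at Doubao-1.5-vision-lite list prices."""
    return (input_tokens * INPUT_CNY_PER_M
            + output_tokens * OUTPUT_CNY_PER_M) / 1_000_000

# Example: a 10,000-token prompt (image + text) with a 1,000-token reply.
print(f"{request_cost_cny(10_000, 1_000):.4f} CNY")  # -> 0.0195 CNY
```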
Quick Simple Comparison
Doubao-1.5-pro-256k: ¥0.69
Doubao-1.5-thinking-pro: ¥0.56
Doubao-1.5-vision-lite: ¥0.21
Basic Parameters
Doubao-1.5-vision-lite Technical Parameters
Parameter Count: Not Announced
Context Length: 128.00k tokens
Training Data Cutoff: -
Open Source Category: Proprietary
Multimodal Support: Text, Image
Throughput: 0
Release Date: 2024-12-18
Response Speed: 0 tokens/s
Benchmark Scores
Below is the performance of Doubao-1.5-vision-lite on a range of standard benchmarks. These tests evaluate the model's capabilities across different tasks and domains.
Intelligence Index: -
Overall indicator of large language model intelligence
Coding Index: -
Indicator of AI model performance on coding tasks
Math Index: 0.37
Indicator of capability in solving mathematical problems, mathematical reasoning, and other math-related tasks
MMLU Pro: -
Massive Multitask Language Understanding (Professional) - a harder, reasoning-heavy variant of MMLU testing text-based knowledge across many subjects
GPQA: -
Graduate-Level Google-Proof Q&A - PhD-level multiple-choice science questions in biology, physics, and chemistry; the Diamond subset is the hardest split
HLE: -
Humanity's Last Exam - a broad, frontier-difficulty academic benchmark spanning many disciplines
LiveCodeBench: -
Evaluation of real-world code writing and programming-competition problem solving, using continuously updated problems
SciCode: -
Measures code generation for scientific computing and domain-specific scientific problems
HumanEval: -
Score on the HumanEval code-generation benchmark
Math 500 Score: -
Score on MATH-500, a 500-problem subset of the MATH benchmark
AIME Score: -
Measures ability to solve high-difficulty mathematics-competition problems (AIME level)
Related Models
GPT 5 Mini (openai): Input ¥1.8 /M tokens, Output ¥14.4 /M tokens, Context 400k
GPT 5 Standard (openai): Input ¥63 /M tokens, Output ¥504 /M tokens, Context 400k
GPT 5 Nano (openai): Input ¥0.36 /M tokens, Output ¥2.88 /M tokens, Context 400k
GPT 5 (openai): Input ¥9 /M tokens, Output ¥72 /M tokens, Context 400k
GLM 4.5 (chatglm): Input ¥0.43 /M tokens, Output ¥1.01 /M tokens, Context 131k
Gemini 1.0 Pro (google): Input ¥3.6 /M tokens, Output ¥10.8 /M tokens, Context 33k
Gemini 2.0 Flash Lite (Preview) (google): Input ¥0.58 /M tokens, Output ¥2.16 /M tokens, Context 1M
GPT 4 (openai): Input ¥216 /M tokens, Output ¥432 /M tokens, Context 8,192 tokens
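For a quick like-for-like read of the list above, the sketch below ranks a few of the models by the cost of one illustrative workload; the prices are copied from the list, and the 1M-input/1M-output workload is purely an assumption for comparison.

```python
# Rank models by cost of one illustrative workload:
# 1M input tokens + 1M output tokens, at list prices (CNY per million tokens).
PRICES = {
    "Doubao-1.5-vision-lite": (1.5, 4.5),
    "GPT 5 Nano": (0.36, 2.88),
    "GLM 4.5": (0.43, 1.01),
    "GPT 5 Mini": (1.8, 14.4),
    "GPT 5": (9.0, 72.0),
}

for name, (inp, out) in sorted(PRICES.items(), key=lambda kv: sum(kv[1])):
    print(f"{name:<24} {inp + out:7.2f} CNY")
```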