Claude 3.5 Haiku
Claude 3.5 Haiku is Anthropic's fastest model, offering advanced coding, tool use, and reasoning capabilities at an affordable price. It excels at user-facing products, specialized sub-agent tasks, and generating personalized experiences from large volumes of data. It is particularly well suited to code completion, interactive chatbots, data extraction, and real-time content moderation.
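
For coding and data-extraction workloads like those above, here is a minimal sketch of calling the model through Anthropic's Python SDK. The model ID shown is the dated Claude 3.5 Haiku snapshot; the prompt and setup are illustrative:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-haiku-20241022",  # dated Claude 3.5 Haiku snapshot
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Extract the invoice number from: 'Invoice #A-1042, due 2024-11-01'",
    }],
)
print(message.content[0].text)
```
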
Intelligence: Relatively Weak
Speed: Relatively Slow
Input Supported Modalities: Text
Is Reasoning Model: No
Context Window: 200,000 tokens
Maximum Output Tokens: 8,192
Knowledge Cutoff: -

Pricing

Input: ¥5.76 /M tokens
Output: ¥28.8 /M tokens
Blended Price: ¥11.52 /M tokens
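
The blended figure is consistent with the common 3:1 input-to-output token weighting, though the page does not state the ratio used; a quick sketch of per-request cost at these rates (token counts are illustrative):

```python
# Cost estimate for a single request at the listed Claude 3.5 Haiku rates.
INPUT_PRICE = 5.76    # ¥ per million input tokens
OUTPUT_PRICE = 28.80  # ¥ per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the cost in ¥ for one request."""
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000

# The blended price matches a 3:1 input:output weighting (an assumption,
# not stated on this page): (3 * 5.76 + 1 * 28.80) / 4 = 11.52
blended = (3 * INPUT_PRICE + 1 * OUTPUT_PRICE) / 4

print(request_cost(30_000, 2_000))  # -> 0.2304 (¥ for 30k in / 2k out)
print(blended)                      # -> 11.52 (¥ per M tokens)
```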

Quick Price Comparison

Claude 4 Opus (Extended Thinking): ¥15
Claude 3 Opus: ¥15
Claude 4 Sonnet: ¥3

Basic Parameters

Claude 3.5 Haiku Technical Parameters
Parameter Count: Not Announced
Context Length: 200.00k tokens
Training Data Cutoff: -
Open Source Category: Proprietary
Multimodal Support: Text Only
Throughput: 104
Release Date: 2024-10-22
Response Speed: 66.38 tokens/s
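
A tokens-per-second figure like the response speed above can be approximated by streaming a completion and dividing output tokens by wall-clock time. A rough sketch using Anthropic's Python SDK; the model ID and prompt are illustrative, and results vary with prompt, load, and region:

```python
import time
import anthropic

client = anthropic.Anthropic()

start = time.perf_counter()
with client.messages.stream(
    model="claude-3-5-haiku-20241022",
    max_tokens=512,
    messages=[{"role": "user", "content": "Summarize the rules of chess."}],
) as stream:
    for _ in stream.text_stream:  # consume tokens as they arrive
        pass
    final = stream.get_final_message()
elapsed = time.perf_counter() - start

print(f"{final.usage.output_tokens / elapsed:.1f} tokens/s")
```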

Benchmark Scores

Below is the performance of Claude 3.5 Haiku in various standard benchmark tests. These tests evaluate the model's capabilities in different tasks and domains.
Intelligence Index
34.74
Composite index of overall large language model capability
Coding Index
29.43
Indicator of AI model performance on coding tasks
Math Index
37.7
Composite indicator of mathematical problem-solving and reasoning ability
MMLU Pro
63.4
Massive Multitask Language Understanding (Professional) - a harder, reasoning-focused extension of MMLU with multiple-choice questions across many academic subjects
GPQA
40.8
Graduate-Level Google-Proof Q&A (Diamond subset) - expert-written questions in biology, physics, and chemistry
HLE
3.5
Humanity's Last Exam - a broad, high-difficulty benchmark of closed-ended academic questions spanning many disciplines
LiveCodeBench
31.4
Evaluation of real-world code writing using competitive-programming problems drawn from recent contests
SciCode
27.4
The model's ability to generate code for scientific-computing problems in specific research domains
HumanEval
85.9
Function-completion benchmark of 164 hand-written Python programming problems (a minimal pass@1 sketch follows this list)
Math 500 Score
72.1
Score on MATH 500, a 500-problem subset of the MATH competition-mathematics benchmark
AIME Score
3.3
Measures the ability to solve high-difficulty competition problems at the level of the American Invitational Mathematics Examination (AIME)
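
For context on how a score like the HumanEval result is produced: each problem pairs a function stub with hidden tests, and a completion counts as passing if the tests run cleanly. A minimal pass@1 sketch with a toy stand-in problem (not the real HumanEval set):

```python
# Toy stand-in for a HumanEval-style problem: stub, model completion, hidden tests.
problems = [
    {
        "prompt": "def add(a, b):\n",
        "completion": "    return a + b\n",  # a model's completion would go here
        "test": "assert add(2, 3) == 5\nassert add(-1, 1) == 0\n",
    },
]

def passes(problem: dict) -> bool:
    """Run stub + completion + hidden tests in a fresh namespace."""
    namespace: dict = {}
    try:
        exec(problem["prompt"] + problem["completion"] + problem["test"], namespace)
        return True
    except Exception:
        return False

pass_at_1 = sum(passes(p) for p in problems) / len(problems)
print(f"pass@1 = {pass_at_1:.1%}")  # -> 100.0% on this toy set
```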