
Claude 3.5 Sonnet (June '24)
The June 2024 release of Claude 3.5 Sonnet from Anthropic, and the first model in the Claude 3.5 series. It brought significant improvements in reasoning, code generation, and conversation quality over previous versions, establishing Claude's position among high-quality AI assistants. It was particularly strong on complex reasoning and creative tasks, and laid the foundation for subsequent releases.
Intelligence: Relatively Weak
Speed: Relatively Slow
Input Supported Modalities: -
Is Reasoning Model: Yes
Context Window: 200,000 tokens
Maximum Output Tokens: -
Knowledge Cutoff: -
Pricing
Input: - /M tokens
Output: - /M tokens
Blended Price: ¥43.2 /M tokens
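The blended price above folds input and output rates into a single per-million-token figure. As a minimal sketch (the 3:1 input-to-output weighting is an assumption based on a common industry convention, not anything stated on this page, and the example prices are hypothetical since this page lists no input/output rates):

```python
def blended_price(input_price: float, output_price: float,
                  input_ratio: float = 3.0, output_ratio: float = 1.0) -> float:
    """Weighted average price per million tokens.

    Assumes a 3:1 input:output token mix, a common convention for
    blended LLM pricing; this page does not state which ratio it uses.
    """
    total = input_ratio + output_ratio
    return (input_price * input_ratio + output_price * output_ratio) / total

# Hypothetical rates of ¥21.6/M input and ¥108/M output would blend to ¥43.2/M:
print(blended_price(21.6, 108.0))  # 43.2
```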
Quick Simple Comparison
Claude Opus 4.1: ¥15
Claude 4 Sonnet: ¥3
Claude 3.5 Haiku: ¥0.8
Basic Parameters
Claude 3.5 Sonnet (June '24) Technical Parameters
Parameter Count: Not Announced
Context Length: 200.00k tokens
Training Data Cutoff: -
Open Source Category: Proprietary
Multimodal Support: Text Only
Throughput: 1,809
Release Date: 2024-06-21
Response Speed: 79.6 tokens/s
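For reference, a minimal sketch of calling this model through the Anthropic Python SDK; the dated model ID claude-3-5-sonnet-20240620 corresponds to this June 2024 release, while the prompt and the max_tokens value are arbitrary examples:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# "claude-3-5-sonnet-20240620" is the dated model ID for the June 2024 release.
response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,  # response budget; the prompt must fit in the 200k context window
    messages=[
        {"role": "user", "content": "Summarize the MMLU-Pro benchmark in two sentences."}
    ],
)
print(response.content[0].text)
```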
Benchmark Scores
Below is the performance of Claude 3.5 Sonnet (June '24) on a range of standard benchmarks, which evaluate the model's capabilities across different tasks and domains.
Intelligence Index: 40
A composite indicator of overall large language model intelligence.
Coding Index: -
An indicator of the model's performance on coding tasks.
Math Index: 39.6
An indicator of capability in solving mathematical problems and performing mathematical reasoning.
MMLU Pro: 75.1
Massive Multitask Language Understanding (Pro) - a harder, reasoning-focused extension of MMLU with expert-level questions across many subjects.
GPQA: 56
Graduate-Level Google-Proof Q&A - graduate-level science questions in biology, physics, and chemistry; the commonly reported Diamond subset is the hardest split.
HLE: 3.7
Humanity's Last Exam - a very difficult benchmark of expert-level questions spanning many disciplines.
LiveCodeBench: -
An evaluation of real-world code writing and competitive programming problem solving.
SciCode: 31.6
The model's capability in code generation for scientific computing and specific scientific domains.
HumanEval: 89.9
Score on the HumanEval code-generation benchmark, conventionally reported as pass@1 (see the sketch after this list).
MATH 500 Score: 69.5
Score on MATH 500, a 500-problem subset of the MATH benchmark of competition-style mathematics problems.
AIME Score: 9.7
A measure of the model's ability to solve high-difficulty mathematics competition problems at AIME level.
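Coding scores like HumanEval are conventionally reported as pass@1. As background, a sketch of the unbiased pass@k estimator introduced alongside HumanEval (Chen et al., 2021), assuming n samples are generated per problem and c of them pass the unit tests:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021).

    n: total samples generated per problem
    c: samples that pass all unit tests
    k: sample budget being evaluated
    """
    if n - c < k:
        return 1.0  # every size-k draw must contain at least one passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 200 samples with 40 correct: estimated pass@1 reduces to c/n = 0.2
print(pass_at_k(200, 40, 1))  # 0.2
```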
Related Models
GPT 5 Mini (openai): Input ¥1.8 /M tokens, Output ¥14.4 /M tokens, Context Length 400k
GPT 5 Standard (openai): Input ¥63 /M tokens, Output ¥504 /M tokens, Context Length 400k
GPT 5 Nano (openai): Input ¥0.36 /M tokens, Output ¥2.88 /M tokens, Context Length 400k
GPT 5 (openai): Input ¥9 /M tokens, Output ¥72 /M tokens, Context Length 400k
GLM 4.5 (chatglm): Input ¥0.43 /M tokens, Output ¥1.01 /M tokens, Context Length 131k
Gemini 2.0 Flash Lite (Preview) (google): Input ¥0.58 /M tokens, Output ¥2.16 /M tokens, Context Length 1M
Gemini 1.0 Pro (google): Input ¥3.6 /M tokens, Output ¥10.8 /M tokens, Context Length 33k
Qwen2.5 Coder Instruct 32B (alibaba): Input ¥0.65 /M tokens, Output ¥0.65 /M tokens, Context Length 131k
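As an illustration only, a sketch that ranks the models above by cost for a hypothetical workload, using the per-million-token prices listed on this page (the 2M-input / 0.5M-output mix is an arbitrary assumption):

```python
# (input ¥/M tokens, output ¥/M tokens), copied from the list above
PRICES = {
    "GPT 5 Mini": (1.8, 14.4),
    "GPT 5 Standard": (63.0, 504.0),
    "GPT 5 Nano": (0.36, 2.88),
    "GPT 5": (9.0, 72.0),
    "GLM 4.5": (0.43, 1.01),
    "Gemini 2.0 Flash Lite (Preview)": (0.58, 2.16),
    "Gemini 1.0 Pro": (3.6, 10.8),
    "Qwen2.5 Coder Instruct 32B": (0.65, 0.65),
}

def job_cost(input_m: float, output_m: float) -> None:
    """Print each model's cost for a workload, cheapest first."""
    ranked = sorted(PRICES.items(),
                    key=lambda kv: kv[1][0] * input_m + kv[1][1] * output_m)
    for model, (p_in, p_out) in ranked:
        cost = p_in * input_m + p_out * output_m
        print(f"{model:35s} ¥{cost:10.2f}")

job_cost(input_m=2.0, output_m=0.5)  # example: 2M input + 0.5M output tokens
```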