Gemini 2.5 Pro Preview (May '25)

Gemini 2.5 Pro leads common benchmark tests, with enhanced reasoning capabilities, multimodal input support (text, image, video, and audio), and a 1-million-token context window.
Intelligence: Relatively Strong
Speed: Medium
Input Modalities: Text, Image, Video, Audio
Reasoning Model: Yes
Context Window: 1,000,000 tokens
Maximum Output Tokens: 65,535
Knowledge Cutoff: 2025-01-31
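A minimal usage sketch with the google-generativeai Python SDK. The model ID "gemini-2.5-pro-preview-05-06" is an assumption inferred from the release date listed below; check it against your provider's current model list.

```python
# Sketch only: assumes the google-generativeai SDK and a preview model ID
# inferred from the 2025-05-06 release date (not confirmed by this page).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-2.5-pro-preview-05-06")

prompt = "Summarize the trade-offs of a 1,000,000-token context window."
print(model.count_tokens(prompt))  # stays far below the 1M-token window
response = model.generate_content(prompt)
print(response.text)
```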

Pricing

Input: ¥9 /M tokens
Output: ¥72 /M tokens
Blended Price: ¥24.75 /M tokens
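The blended figure is consistent with a 3:1 input-to-output token mix, a common convention for blended pricing; the 3:1 weighting is inferred here, not stated by the source. A quick check:

```python
# Verify the blended price under an assumed 3:1 input:output token mix.
input_price = 9.0    # ¥ per million input tokens
output_price = 72.0  # ¥ per million output tokens
blended = (3 * input_price + 1 * output_price) / 4
print(blended)  # 24.75, matching the listed blended price
```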

Quick Simple Comparison (Input / Output)

Gemini 2.0 Flash Thinking Experimental (Dec '24)
Gemini 1.5 Pro (Sep '24): ¥2.5
Gemini 2.0 Pro Experimental (Feb '25)

Basic Parameters

Gemini 2.5 Pro Preview (May '25) Technical Parameters
Parameter Count: Not Announced
Context Length: 1.0M tokens
Training Data Cutoff: 2025-01-31
Open Source Category: Proprietary
Multimodal Support: Text, Image
Throughput: 85
Release Date: 2025-05-06
Response Speed: 146.2 tokens/s
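At the listed response speed, a maximum-length response takes on the order of minutes to generate. A rough estimate, assuming the decode speed above is sustained for the whole response:

```python
# Rough time-to-complete for a maximum-length response at the listed speed.
max_output_tokens = 65_535
tokens_per_second = 146.2  # listed response speed (assumed sustained)
seconds = max_output_tokens / tokens_per_second
print(f"{seconds:.0f} s (~{seconds / 60:.1f} min)")  # ~448 s, ~7.5 min
```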

Benchmark Scores

Below are the scores of Gemini 2.5 Pro Preview (May '25) on standard benchmark tests. These tests evaluate the model's capabilities across different tasks and domains.
Intelligence Index: 67.92
Composite measure of overall large language model intelligence
Coding Index: 59.29
Indicator of model performance on coding tasks
Math Index: 91.47
Indicator of capability in mathematical reasoning and math-related tasks
MMLU Pro: 83.7
Massive Multitask Language Understanding (Professional): a harder, reasoning-focused variant of MMLU spanning many academic subjects
GPQA: 82.2
Graduate-Level Google-Proof Q&A: advanced science questions in biology, physics, and chemistry, commonly reported on the Diamond subset
HLE: 15.4
Humanity's Last Exam: a frontier benchmark of extremely difficult, expert-written questions across many disciplines
LiveCodeBench: 77
Evaluates real-world code writing and competitive-programming problem solving on a continuously updated problem set
SciCode: 41.6
Code generation for scientific computing and domain-specific scientific problems
HumanEval: 98.9
Python code generation from docstrings, typically reported as pass@1 (see the pass@k sketch after this list)
Math 500 Score: 98.6
Score on MATH-500, a 500-problem subset of the MATH competition-mathematics benchmark
AIME Score: 84.3
Measures ability to solve high-difficulty competition problems at the level of the American Invitational Mathematics Examination (AIME)
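For context on how HumanEval-style scores are reported, below is the standard unbiased pass@k estimator from the original HumanEval paper; this is a sketch of the scoring formula only, not the benchmark harness itself.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: n samples drawn, c of them correct, budget k."""
    if n - c < k:
        return 1.0  # every size-k draw contains at least one correct sample
    # 1 - C(n-c, k) / C(n, k), computed in a numerically stable product form
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))

# Example: 10 samples per problem, 9 correct -> pass@1 = 0.9
print(pass_at_k(n=10, c=9, k=1))
```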