
Gemini 1.0 Pro

Gemini 1.0 Pro is a natural language processing (NLP) model designed for tasks such as multi-turn text and code chat as well as code generation. It supports text input and output and is well suited to natural-language tasks. The model is optimized for handling complex conversations and generating code snippets. It offers adjustable safety settings and supports function calling, but it does not support JSON mode or system instructions. The latest stable version is gemini-1.0-pro-001, last updated in February 2024.
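
For orientation, below is a minimal sketch of a multi-turn text/code chat against gemini-1.0-pro-001 using the google-generativeai Python SDK. It assumes the SDK is installed and a GOOGLE_API_KEY environment variable is set; the particular safety setting, generation parameters, and prompts are illustrative and not part of the listing.

# Minimal sketch: multi-turn text/code chat with gemini-1.0-pro-001.
# Assumes the google-generativeai SDK is installed and GOOGLE_API_KEY is set.
import os
import google.generativeai as genai
from google.generativeai.types import HarmCategory, HarmBlockThreshold

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Adjustable safety settings and an output cap within the 8,192-token limit.
model = genai.GenerativeModel(
    model_name="gemini-1.0-pro-001",
    generation_config={"max_output_tokens": 2048, "temperature": 0.7},
    safety_settings={
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
)

# The chat object keeps the conversation history across turns.
chat = model.start_chat()
reply = chat.send_message("Write a Python function that reverses a string.")
print(reply.text)

follow_up = chat.send_message("Now add type hints and a docstring.")
print(follow_up.text)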
Intelligence: Weak
Speed: Slow
Input Supported Modalities: Text
Is Reasoning Model: No
Context Window: 32,768 tokens
Maximum Output Tokens: 8,192
Knowledge Cutoff: 2024-02-01
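
These limits can be checked client-side before sending a request. The sketch below assumes the google-generativeai Python SDK and a configured API key; fits_in_context is a hypothetical helper built on the SDK's count_tokens call, with the limits taken from the listing above.

# Sketch: verifying a prompt fits the 32,768-token context window while
# leaving room for the response. Assumes GOOGLE_API_KEY is set.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

CONTEXT_WINDOW = 32_768      # total tokens (input + output), from the listing
MAX_OUTPUT_TOKENS = 8_192    # maximum response length, from the listing

model = genai.GenerativeModel("gemini-1.0-pro-001")

def fits_in_context(prompt: str, reserved_for_output: int = MAX_OUTPUT_TOKENS) -> bool:
    """Return True if the prompt plus the reserved output budget fits the window."""
    prompt_tokens = model.count_tokens(prompt).total_tokens
    return prompt_tokens + reserved_for_output <= CONTEXT_WINDOW

print(fits_in_context("Summarize the following report: ..."))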
Pricing
Input: ¥3.6 /M tokens
Output: ¥10.8 /M tokens
Blended Price: ¥5.4 /M tokens
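
To make the per-token prices concrete, the sketch below estimates the cost of a single request from the listed rates. The 3:1 input-to-output mix used for the blended figure is an assumption, chosen because it reproduces the listed ¥5.4 /M tokens.

# Sketch: estimating request cost from the listed per-million-token prices.
INPUT_PRICE_PER_M = 3.6    # ¥ per 1M input tokens (from the listing)
OUTPUT_PRICE_PER_M = 10.8  # ¥ per 1M output tokens (from the listing)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in ¥ for one request with the given token counts."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: 10,000 input tokens and 2,000 output tokens -> ¥0.0576
print(f"¥{request_cost(10_000, 2_000):.4f}")

# A 3:1 input:output mix reproduces the listed blended price of ¥5.4 /M tokens.
blended = (3 * INPUT_PRICE_PER_M + 1 * OUTPUT_PRICE_PER_M) / 4
print(blended)  # 5.4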
Quick Simple Comparison
Gemini 2.0 Pro Experimental (Feb '25)
Gemini 1.5 Pro (May '24): ¥2.5
Gemini 2.0 Flash Thinking Experimental (Dec '24)
Basic Parameters
Gemini 1.0 Pro Technical Parameters
Parameter Count: Not Announced
Context Length: 32.77k tokens
Training Data Cutoff: 2024-02-01
Open Source Category: Proprietary
Multimodal Support: Text Only
Throughput: 120
Release Date: 2023-12-06
Response Speed: 0 tokens/s
Benchmark Scores
Below is the performance of Gemini 1.0 Pro on various standard benchmark tests. These tests evaluate the model's capabilities across different tasks and domains.
Intelligence Index: 2059
Large Language Model Intelligence Level
Coding Index: 1167
Indicator of AI model performance on coding tasks
Math Index: -
Capability indicator for solving mathematical problems, mathematical reasoning, and other math-related tasks
MMLU Pro: 43.1
Massive Multitask Language Understanding (Pro) - testing knowledge and reasoning across a broad range of subjects
GPQA: 27.7
Graduate-Level Google-Proof Q&A - testing advanced science knowledge, typically reported on the Diamond subset
HLE: 4.6
Humanity's Last Exam - a broad benchmark of expert-level, high-difficulty questions
LiveCodeBench: 11.6
Evaluation focused on large language models' ability to write real-world code and solve programming-competition problems
SciCode: 11.7
The model's capability in code generation for scientific computing and specific scientific domains
HumanEval: 2.2
Score achieved by the model on the HumanEval code-generation benchmark
MATH 500 Score: 40.3
Score on MATH 500, a widely used 500-problem subset of the MATH competition-mathematics benchmark
AIME Score: 0.7
Measures the model's ability to solve high-difficulty mathematical competition problems (AIME level)