
Gemini 1.5 Flash 8B

A multimodal model that can efficiently process audio, images, video, and text. It supports JSON mode, function calling, code execution, and system instructions, is optimized for fast inference, and has 8 billion parameters.
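The page gives no usage example, but a minimal sketch of these features with the google-generativeai Python SDK looks roughly like the following; the model id `gemini-1.5-flash-8b`, the prompt, and the placeholder API key are assumptions for illustration rather than details taken from this page.

```python
# Hedged sketch: system instruction + JSON mode via the google-generativeai SDK.
# The model id "gemini-1.5-flash-8b" is assumed; replace the placeholder API key.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel(
    "gemini-1.5-flash-8b",
    system_instruction="You are a terse assistant that always answers in JSON.",
)

response = model.generate_content(
    "List three input modalities this model accepts.",
    generation_config=genai.GenerationConfig(
        response_mime_type="application/json",  # JSON mode
        temperature=0.2,
    ),
)
print(response.text)  # JSON string produced by the model
```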
Intelligence: Relatively Weak
Speed: Fast
Input Supported Modalities: Audio, Image, Video, Text
Is Reasoning Model: Yes
Context Window: 1,048,576 tokens
Maximum Output Tokens: 8,192
Knowledge Cutoff: 2024-10-01
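To show how the context window and output cap translate into request settings, here is a small sketch, assuming the same SDK and model id as above; the prompt is a stand-in and the limits are the figures listed here.

```python
# Hedged sketch: respecting the listed context window and output-token cap.
import google.generativeai as genai

CONTEXT_WINDOW = 1_048_576   # maximum input tokens listed on this page
MAX_OUTPUT_TOKENS = 8_192    # maximum output tokens listed on this page

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash-8b")  # assumed model id

prompt = "Summarize the attached transcript."  # stand-in prompt
prompt_tokens = model.count_tokens(prompt).total_tokens
assert prompt_tokens <= CONTEXT_WINDOW, "prompt exceeds the context window"

response = model.generate_content(
    prompt,
    generation_config=genai.GenerationConfig(max_output_tokens=MAX_OUTPUT_TOKENS),
)
print(response.text)
```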
Pricing
Input: ¥0.58 / M tokens
Output: ¥2.16 / M tokens
Blended Price: ¥0.47 / M tokens
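To make the per-million-token prices concrete, the following worked example estimates the cost of a single request from its input and output token counts; the token counts are invented for illustration.

```python
# Hedged sketch: estimating request cost from the listed per-million-token prices.
INPUT_PRICE_PER_M = 0.58    # yen per million input tokens (from this page)
OUTPUT_PRICE_PER_M = 2.16   # yen per million output tokens (from this page)

def estimate_cost_yen(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in yen for one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: 200,000 input tokens and 4,000 output tokens.
print(f"¥{estimate_cost_yen(200_000, 4_000):.4f}")  # ≈ ¥0.1246
```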
Quick Simple Comparison
Gemini 2.0 Flash Thinking Experimental (Dec '24)
Gemini 2.0 Pro Experimental (Feb '25)
Gemini 1.5 Pro (May '24): ¥2.5
Basic Parameters
Gemini 1.5 Flash 8B Technical Parameters
Parameter Count: 8,000.0M (8B)
Context Length: 1.0M tokens
Training Data Cutoff: 2024-10-01
Open Source Category: Proprietary
Multimodal Support: Text, Image
Throughput: 150
Release Date: 2024-10-03
Response Speed: 279.24 tokens/s
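As a rough way to read the response-speed figure, the snippet below estimates how long a completion of a given length would take at the listed generation rate; it ignores network latency and time-to-first-token, which this page does not report.

```python
# Hedged sketch: back-of-envelope generation time from the listed response speed.
RESPONSE_SPEED_TPS = 279.24  # output tokens per second (from this page)

def estimated_generation_seconds(output_tokens: int) -> float:
    """Time to generate `output_tokens` at the listed rate, excluding overheads."""
    return output_tokens / RESPONSE_SPEED_TPS

print(f"{estimated_generation_seconds(8_192):.1f} s for a full 8,192-token output")
# ≈ 29.3 s
```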
Benchmark Scores
Below is the performance of Gemini 1.5 Flash 8B on a range of standard benchmarks. These tests evaluate the model's capabilities across different tasks and domains.
Intelligence Index: 3083
Large Language Model Intelligence Level
Coding Index: 2230
Indicator of the model's performance on coding tasks
Math Index: -
Indicator of the model's capability in mathematical reasoning and math-related tasks
MMLU Pro: 56.9
Massive Multitask Language Understanding (Pro) - A harder, reasoning-focused extension of MMLU that tests text-based knowledge across many academic and professional subjects
GPQA: 35.9
Graduate-Level Google-Proof Q&A - Expert-written questions in biology, physics, and chemistry; the reported score typically refers to the hard Diamond subset
HLE: 4.5
Humanity's Last Exam - A broad benchmark of very difficult, expert-written questions spanning many disciplines
LiveCodeBench: 21.7
Evaluates the model's ability to write real-world code and solve programming-competition problems
SciCode: 22.9
Measures code generation for scientific computing and domain-specific scientific problems
HumanEval: 11.6
Score on the HumanEval code-generation benchmark
Math 500 Score: 68.9
Score on MATH-500, a 500-problem subset of the MATH benchmark of competition-style mathematics problems
AIME Score: 3.3
Measures the model's ability to solve high-difficulty competition mathematics problems at the AIME level