The first-generation Grok model developed by xAI, known for its distinctive humor and candid conversational style. It offers strong real-time information retrieval and can access up-to-date web content. It maintains accuracy while adopting a more natural and engaging conversational style, making it well suited to applications that need a personalized AI assistant, especially social media, entertainment, and creative-dialogue scenarios.
Intelligence: Relatively Weak
Speed: Slow
Input Supported Modalities: Text
Is Reasoning Model: No
Context Window: 8,192
Maximum Output Tokens: 128,000
Knowledge Cutoff: -

Pricing

Input: - /M tokens
Output: - /M tokens
Blended Price: - /M tokens

Quick Simple Comparison

Model price comparison (¥ per M tokens):

Grok 3 mini Reasoning: ¥0.3
Grok 3 mini Reasoning (Low): ¥0.3
Grok 3: ¥3
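
To make per-million-token pricing concrete, here is a minimal cost-estimation sketch. The prices and token counts are illustrative placeholders drawn from the comparison above; Grok-1 itself has no listed price on this page.

```python
# Rough cost estimate under per-million-token pricing.
# Prices are illustrative placeholders, not official Grok-1 pricing.

PRICE_PER_M_INPUT = 0.3   # ¥ per 1M input tokens (assumed)
PRICE_PER_M_OUTPUT = 0.3  # ¥ per 1M output tokens (assumed)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in ¥ for a single request."""
    return (input_tokens * PRICE_PER_M_INPUT
            + output_tokens * PRICE_PER_M_OUTPUT) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
print(f"¥{request_cost(2_000, 500):.6f}")  # prints ¥0.000750
```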

Basic Parameters

Grok-1 Technical Parameters

Parameter Count: 314B (Mixture-of-Experts, disclosed with the open-weights release)
Context Length: 8,192 tokens
Training Data Cutoff: -
Open Source Category: Open Weights (Permissive License)
Multimodal Support: Text Only
Throughput: -
Release Date: 2024-03-17
Response Speed: -
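
Since Grok-1's context length is 8,192 tokens, client code typically has to trim conversation history before building a prompt. Below is a minimal sketch; `count_tokens` is a hypothetical stand-in, so substitute the real tokenizer of whatever serving stack you use.

```python
# Keep the most recent messages that fit Grok-1's 8,192-token context,
# reserving part of the window for the model's reply.

CONTEXT_WINDOW = 8_192
RESERVED_FOR_OUTPUT = 1_024  # assumed completion budget

def count_tokens(text: str) -> int:
    # Hypothetical placeholder: crude whitespace split, not a real tokenizer.
    return len(text.split())

def trim_history(messages: list[str]) -> list[str]:
    """Drop the oldest messages until the remainder fits the input budget."""
    budget = CONTEXT_WINDOW - RESERVED_FOR_OUTPUT
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):  # walk newest-first
        cost = count_tokens(msg)
        if total + cost > budget:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))  # restore chronological order
```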

Benchmark Scores

Below are Grok-1's results on standard benchmark tests, which evaluate the model's capabilities across different tasks and domains.

Intelligence Index: 32.69
Overall intelligence level of the large language model.

Coding Index: -
Indicator of the model's performance on coding tasks.

Math Index: -
Indicator of the model's ability to solve mathematical problems and carry out mathematical reasoning.

MMLU Pro: 51
Massive Multitask Language Understanding (Pro): a harder, reasoning-focused successor to MMLU, testing text-based knowledge across many subjects.

GPQA: 35.9
Graduate-Level Google-Proof Q&A: expert-written science questions in biology, physics, and chemistry, commonly reported on the Diamond subset.

HLE: -
Humanity's Last Exam: a broad benchmark of expert-level questions spanning many disciplines.

LiveCodeBench: -
Evaluates large language models on real-world code writing and programming-competition problems.

SciCode: -
Measures code generation for scientific computing and domain-specific scientific problems.

HumanEval: 74.1
Pass rate on the HumanEval code-generation benchmark of hand-written Python programming problems (see the pass@k sketch after this list).

Math 500 Score: 51
Score on MATH-500, a 500-problem subset of the MATH competition-mathematics benchmark.

AIME Score: -
Measures the ability to solve high-difficulty mathematical competition problems at AIME level.
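
HumanEval scores such as the 74.1 above are conventionally reported as pass@k. For reference, this is the unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021), sketched in Python; the sample counts at the bottom are made-up illustrations, not Grok-1 evaluation data.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total samples generated per problem
    c: samples that passed the unit tests
    k: evaluation budget
    """
    if n - c < k:
        return 1.0
    # Numerically stable form of 1 - C(n-c, k) / C(n, k).
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))

# Illustrative only: 200 samples per problem, 148 passing.
print(f"pass@1 ≈ {pass_at_k(200, 148, 1):.3f}")  # ≈ 0.740
```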