
GLM-4-Air-250414

A lightweight, cost-effective inference model from Zhipu AI, built on the GLM-4 architecture and supporting a 128K context window. Its output style is standardized and concise, averaging about 3,800 characters per response, and its overall cost is reported to be roughly 30 times lower than comparable models, making it well suited to high-frequency inference scenarios.
Intelligence: Weak
Speed: Relatively Slow
Input Supported Modalities: Text only
Is Reasoning Model: No
Context Window: 128,000 tokens
Maximum Output Tokens: 0
Knowledge Cutoff: 2024-10-31
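The specs above (128K context, text-only input, no reasoning mode) map onto an ordinary chat-completions call. Below is a minimal sketch assuming the official zhipuai Python SDK and its OpenAI-style chat.completions.create interface; the model identifier string and the environment-variable handling are illustrative assumptions, not confirmed by this page.

```python
# Minimal sketch: calling GLM-4-Air-250414 via the zhipuai SDK (assumed interface).
import os

from zhipuai import ZhipuAI  # assumed: official Zhipu AI Python SDK

client = ZhipuAI(api_key=os.environ["ZHIPUAI_API_KEY"])

response = client.chat.completions.create(
    model="glm-4-air-250414",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the key terms of this clause: ..."},
    ],
    # Text-only model without a separate reasoning mode, so plain chat messages
    # suffice; long documents fit within the 128K-token context window.
)

print(response.choices[0].message.content)
```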
Pricing
Input: ¥0.5 / M tokens
Output: - / M tokens
Blended Price: ¥0.5 / M tokens
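Because pricing is quoted per million tokens, a per-request cost estimate is simply the token count scaled by the listed rate. A small sketch, assuming the ¥0.5 / M-token blended rate above applies to all tokens (the separate output rate is not listed on this page):

```python
# Rough per-request cost estimate from the listed per-million-token pricing.
# Assumption: the blended rate of ¥0.5 per million tokens covers all tokens.
BLENDED_PRICE_CNY_PER_MTOK = 0.5

def estimate_cost_cny(input_tokens: int, output_tokens: int) -> float:
    """Return the approximate request cost in CNY."""
    total_tokens = input_tokens + output_tokens
    return total_tokens / 1_000_000 * BLENDED_PRICE_CNY_PER_MTOK

# Example: a 100K-token document plus a 2K-token summary.
print(f"¥{estimate_cost_cny(100_000, 2_000):.4f}")  # -> ¥0.0510
```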
Quick Simple Comparison (blended price)
GLM-4-Air-250414: ¥0.07
GLM-4-Plus: ¥0.63
Basic Parameters
GLM-4-Air-250414 Technical Parameters
Parameter Count: Not Announced
Context Length: 128.00k tokens
Training Data Cutoff: 2024-10-31
Open Source Category: Proprietary
Multimodal Support: Text Only
Throughput: 0
Release Date: 2025-04-15
Response Speed: 64 tokens/s
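The listed response speed translates directly into an expected generation time: output length divided by decode throughput. A back-of-the-envelope sketch using the 64 tokens/s figure above (time to first token is not listed and is ignored here):

```python
# Estimate generation time from the listed decode speed.
RESPONSE_SPEED_TOK_PER_S = 64  # from the parameter table above

def estimated_generation_seconds(output_tokens: int) -> float:
    """Approximate time to stream out the given number of tokens."""
    return output_tokens / RESPONSE_SPEED_TOK_PER_S

print(estimated_generation_seconds(1024))  # -> 16.0 seconds for a 1,024-token reply
```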
Benchmark Scores
Below is the performance of GLM-4-Air-250414 on various standard benchmark tests. These tests evaluate the model's capabilities across different tasks and domains.
Intelligence Index: -
Overall indicator of the large language model's general intelligence level.
Coding Index: -
Indicator of the model's performance on coding tasks.
Math Index: -
Indicator of the model's capability in solving mathematical problems and performing mathematical reasoning.
MMLU Pro: -
Massive Multitask Language Understanding (Professional) - a harder, reasoning-focused extension of MMLU spanning many academic subjects.
GPQA: -
Graduate-Level Google-Proof Q&A - expert-written science questions in biology, physics, and chemistry; the Diamond subset is commonly reported.
HLE: -
Humanity's Last Exam - a broad, expert-level test of frontier academic knowledge and reasoning.
LiveCodeBench: -
Evaluation of real-world code writing and the ability to solve programming-competition problems.
SciCode: -
The model's capability in code generation for scientific computing and specific scientific domains.
HumanEval: -
Score achieved by the model on the HumanEval code-generation benchmark.
MATH 500: -
Score on MATH 500, a 500-problem subset of the MATH benchmark of competition-style mathematics problems.
AIME: -
Measures the ability to solve high-difficulty mathematics competition problems (AIME level).