
Llama 3.3 Nemotron Super 49B V1

Llama-3.3-Nemotron-Super-49B-v1 is a large language model (LLM) derived from Meta's Llama-3.3-70B-Instruct. It is post-trained for reasoning, chat, RAG, and tool calling, and balances accuracy against efficiency (it is optimized to run on a single H100 GPU). Post-training proceeded in multiple stages, including supervised fine-tuning (SFT) and reinforcement learning (RLOO, RPO).
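As a rough illustration of a typical deployment, the sketch below loads the checkpoint with Hugging Face transformers and enables the reasoning mode via the system prompt. The repository id `nvidia/Llama-3_3-Nemotron-Super-49B-v1`, the `trust_remote_code` requirement, and the `detailed thinking on` toggle are assumptions drawn from NVIDIA's public model card and should be verified against the current version.

```python
# Minimal sketch: loading and prompting the model with Hugging Face transformers.
# Repo id and the system-prompt reasoning toggle are assumptions from NVIDIA's
# model card; verify both before relying on them.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "nvidia/Llama-3_3-Nemotron-Super-49B-v1"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,       # bf16 weights; see the VRAM arithmetic below
    device_map="auto",
    trust_remote_code=True,           # assumed: the NAS-derived architecture is custom
)

messages = [
    # "detailed thinking on" / "detailed thinking off" switches reasoning mode
    {"role": "system", "content": "detailed thinking on"},
    {"role": "user", "content": "Summarize the trade-offs of model distillation."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```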
Intelligence: Relatively Weak
Speed: Slow
Input Supported Modalities: Text
Is Reasoning Model: Yes
Context Window: 128,000 tokens
Maximum Output Tokens: 131,072
Knowledge Cutoff: 2023-12-31
Pricing
Input: - /M tokens
Output: - /M tokens
Blended Price: - /M tokens
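Since no per-token pricing is listed, self-hosting is the likely deployment path. As a sketch of how the context window and maximum output figures above map onto request parameters, the example below sends a chat completion to a hypothetical OpenAI-compatible endpoint (e.g., a local vLLM server); the base URL, API key, and served model name are placeholders, not a real service.

```python
# Hypothetical request against a self-hosted OpenAI-compatible server (e.g. vLLM).
# base_url, api_key, and the served model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="nvidia/Llama-3_3-Nemotron-Super-49B-v1",
    messages=[
        {"role": "system", "content": "detailed thinking off"},
        {"role": "user", "content": "Explain RAG in two sentences."},
    ],
    # Prompt tokens + max_tokens must fit inside the 128,000-token context window.
    max_tokens=4096,
)
print(response.choices[0].message.content)
```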
Quick Simple Comparison
Llama 3.1 Nemotron Ultra 253B v1 (Reasoning)
Llama 3.3 Nemotron Super 49B v1 (Reasoning)
Llama 3.1 Nemotron Instruct 70B
Basic Parameters
Llama 3.3 Nemotron Super 49B v1 Technical Parameters
Parameter Count: 49,900.0M
Context Length: 128.00k tokens
Training Data Cutoff: 2023-12-31
Open Source Category: Open Weights (Permissive License)
Multimodal Support: Text Only
Throughput: -
Release Date: 2025-03-18
Response Speed: 0 tokens/s
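The single-H100 optimization claim is easiest to sanity-check with weight-memory arithmetic. The figures below count weights only and ignore KV cache, activations, and runtime overhead, so they are illustrative rather than definitive.

```python
# Back-of-the-envelope weight memory for a 49.9B-parameter model.
# Illustrative arithmetic only: ignores KV cache, activations, and framework overhead.
PARAMS = 49.9e9
BYTES_PER_PARAM = {"fp16/bf16": 2, "fp8/int8": 1, "int4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    gib = PARAMS * nbytes / 1024**3
    print(f"{precision:>9}: ~{gib:6.1f} GiB of weights")

# fp16/bf16: ~ 92.9 GiB -> exceeds one 80 GiB H100
#  fp8/int8: ~ 46.5 GiB -> fits on a single H100 with headroom
#      int4: ~ 23.2 GiB
```

On these numbers, bf16 weights alone (~93 GiB) overflow an 80 GiB H100, which suggests the single-GPU claim assumes either the 94 GiB H100 NVL variant or reduced-precision (FP8/INT4) serving.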
Benchmark Scores
Below is the performance of Llama 3.3 Nemotron Super 49B v1 on various standard benchmark tests. These tests evaluate the model's capabilities across different tasks and domains.
Intelligence Index: 39.32
Large Language Model Intelligence Level
Coding Index: 25.48
Indicator of AI model performance on coding tasks
Math Index: -
Capability indicator in solving mathematical problems, mathematical reasoning, or performing math-related tasks
MMLU Pro: 69.8
Massive Multitask Language Understanding (Pro) - Testing expert-level knowledge and reasoning across a broad range of academic subjects
GPQA: 51.7
Graduate-Level Google-Proof Q&A - Testing advanced scientific knowledge in biology, physics, and chemistry (Diamond subset)
HLE: 3.5
Humanity's Last Exam - A high-difficulty benchmark of expert-written questions spanning many academic domains
LiveCodeBench: 28
Evaluates large language models on real-world code writing and programming-competition problems
SciCode: 22.9
Measures code generation for scientific computing and domain-specific scientific problems
HumanEval: 83.4
Score achieved by the model on the HumanEval code-generation benchmark; a minimal pass@k reference implementation appears after this list
Math 500 Score: 77.5
Score on MATH-500, a 500-problem subset of the MATH benchmark of competition-style mathematics problems
AIME Score: 19.3
Measures the model's ability to solve high-difficulty mathematical competition problems (AIME level)
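Code benchmarks such as HumanEval and LiveCodeBench above are conventionally reported as pass@1. For reference, below is the unbiased pass@k estimator from the original HumanEval paper (Chen et al., 2021); the sample counts in the usage example are made up.

```python
# Unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021):
# pass@k = 1 - C(n - c, k) / C(n, k), computed in a numerically stable form.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """n: total samples generated, c: samples passing the tests, k: budget."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Hypothetical example: 200 samples, 83 pass -> pass@1 estimate of 0.415
print(pass_at_k(n=200, c=83, k=1))
```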