Llama 3.1 Nemotron Ultra 253B V1 (Reasoning)
A 253B-parameter derivative of Meta Llama 3.1 405B Instruct, developed by NVIDIA using Neural Architecture Search (NAS) and vertical compression. It has undergone multi-stage post-training (SFT for math, code, reasoning, chat, and tool invocation; RL with GRPO) to enhance reasoning and instruction-following capabilities. Optimized for the accuracy/efficiency trade-off on NVIDIA GPUs. Supports a 128k context window.
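As a minimal sketch of how such a model is typically queried, assuming an OpenAI-compatible endpoint (the base URL, API key, and exact model ID below are placeholders, not values from this page), the Nemotron family's published usage toggles detailed reasoning via the system prompt:

```python
# Minimal sketch: querying the model through an OpenAI-compatible endpoint.
# The base_url, api_key, and model ID are placeholders; substitute whatever
# your provider actually exposes.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-ultra-253b-v1",
    messages=[
        # Per NVIDIA's published usage for Nemotron models, the system
        # prompt switches the reasoning trace on or off:
        # "detailed thinking on" / "detailed thinking off".
        {"role": "system", "content": "detailed thinking on"},
        {"role": "user", "content": "How many primes are there below 100?"},
    ],
    max_tokens=2048,
)
print(response.choices[0].message.content)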
Intelligence: Medium
Speed: Slow
Input Supported Modalities: Text
Is Reasoning Model: Yes
Context Window: 128,000 tokens
Maximum Output Tokens: 131,072
Knowledge Cutoff: 2023-12-01
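The context window bounds prompt and completion tokens together. A small illustrative check (the helper below is for demonstration only; it is not from the model card):

```python
# Illustrative helper: verify a request fits the advertised context window.
CONTEXT_WINDOW = 128_000  # advertised context window, in tokens

def fits_context(prompt_tokens: int, max_new_tokens: int) -> bool:
    """Prompt plus requested completion must fit in the context window."""
    return prompt_tokens + max_new_tokens <= CONTEXT_WINDOW

assert fits_context(100_000, 20_000)      # 120k total: fits
assert not fits_context(100_000, 40_000)  # 140k total: too large
```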
Pricing
Input: - /M tokens
Output: - /M tokens
Blended Price: ¥6.48 /M tokens
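For rough budgeting, the blended rate can be applied per total token. Note this is an approximation I am assuming here; directory sites usually blend input and output prices at a fixed ratio, and that ratio is not stated on this page:

```python
# Back-of-envelope cost at the blended rate of CNY 6.48 per million tokens.
BLENDED_CNY_PER_MTOK = 6.48

def blended_cost(total_tokens: int) -> float:
    """Approximate cost in CNY for a given total token count."""
    return total_tokens / 1_000_000 * BLENDED_CNY_PER_MTOK

print(f"{blended_cost(250_000):.2f} CNY")  # 0.25M tokens -> 1.62 CNY
```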
Quick Simple Comparison
- Llama 3.3 Nemotron Super 49B v1 (Reasoning)
- Llama 3.3 Nemotron Super 49B v1
- Llama 3.1 Nemotron Ultra 253B v1 (Reasoning)
Basic Parameters
Llama 3.1 Nemotron Ultra 253B v1 Technical Parameters
Parameter Count: 253,000M (253B)
Context Length: 128k tokens
Training Data Cutoff: 2023-12-01
Open Source Category: Open Weights (Permissive License)
Multimodal Support: Text Only
Throughput: -
Release Date: 2025-04-07
Response Speed: 41.97 tokens/s
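For intuition, the listed response speed converts to generation time roughly as below. This is a sketch only; real throughput varies with hardware, batching, and prompt length:

```python
# Rough generation-time estimate from the listed response speed.
RESPONSE_SPEED_TOK_S = 41.97  # tokens per second, as listed above

def generation_seconds(output_tokens: int) -> float:
    """Approximate wall-clock time to generate the given token count."""
    return output_tokens / RESPONSE_SPEED_TOK_S

print(f"{generation_seconds(1_000):.1f} s")  # ~23.8 s for 1,000 tokens
```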
Benchmark Scores
Below is the performance of Llama 3.1 Nemotron Ultra 253B v1 (Reasoning) on various standard benchmarks. These tests evaluate the model's capabilities across different tasks and domains.
Intelligence Index
60.82
Composite indicator of overall large language model intelligence
Coding Index
49.42
Indicator of AI model performance on coding tasks
Math Index
-
Indicator of capability in solving mathematical problems and performing mathematical reasoning
MMLU Pro
82.5
Massive Multitask Language Understanding (Professional) - a harder, reasoning-focused variant of MMLU testing text understanding across many subjects
GPQA
72.8
Graduate-Level Google-Proof Q&A - expert-written science questions in biology, physics, and chemistry (Diamond subset)
HLE
8.1
Humanity's Last Exam - a broad benchmark of extremely difficult, expert-level questions across many disciplines
LiveCodeBench
64.1
Evaluates real-world code writing and competitive programming ability using continuously refreshed problem sets
SciCode
34.7
Measures code generation for scientific computing problems drawn from research domains
HumanEval
-
Score on the HumanEval code-generation benchmark
Math 500 Score
95.2
Score on MATH-500, a 500-problem subset of the MATH benchmark of competition-style mathematics problems
AIME Score
74.7
Measures ability to solve high-difficulty mathematical competition problems at the AIME level
Featured Recommended AI Models
| Model | Provider | Input (per M tokens) | Output (per M tokens) | Context Length |
|---|---|---|---|---|
| Gemini 2.0 Flash Lite (Preview) | google | ¥0.58 | ¥2.16 | 1M |
| Gemini 1.0 Pro | google | ¥3.6 | ¥10.8 | 33k |
| Qwen2.5 Coder Instruct 32B | alibaba | ¥0.65 | ¥0.65 | 131k |
| GPT 4 | openai | ¥216 | ¥432 | 8k |
| Gemini 1.5 Flash 8B | google | ¥0.58 | ¥2.16 | 1M |
| Gemma 3 4B Instruct | google | - | - | 128k |
| Gemini 2.0 Pro Experimental (Feb '25) | google | - | - | 2M |
| Llama 3.2 Instruct 11B (Vision) | meta | ¥0.43 | ¥0.43 | 128k |