Llama 3.2 Instruct 90B (Vision)

Llama 3.2 90B Vision is a large multimodal language model optimized for visual recognition, image reasoning, captioning, and answering questions about images. It accepts text and image input, produces text output, supports a context length of up to 128,000 tokens, and delivers strong performance on image-understanding benchmarks.
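
A minimal Python sketch of image question answering with the Hugging Face transformers library is below. The checkpoint ID is Meta's gated meta-llama/Llama-3.2-90B-Vision-Instruct; the image path and prompt are hypothetical, and serving the full 90B weights in practice requires multiple high-memory GPUs.

import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-90B-Vision-Instruct"

# device_map="auto" shards the 90B weights across available GPUs.
model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("chart.png")  # hypothetical local image

# The chat template places the special image token ahead of the question.
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe what this chart shows."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output[0], skip_special_tokens=True))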
Intelligence: Relatively Weak
Speed: Slow
Input Supported Modalities: Text, Image
Reasoning Model: No
Context Window: 128,000 tokens
Maximum Output Tokens: 128,000
Knowledge Cutoff: 2023-12-01

Pricing

Input: ¥2.52 /M tokens
Output: ¥2.88 /M tokens
Blended Price: ¥3.9 /M tokens
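
The blended figure is normally a weighted average of the input and output prices. A minimal Python sketch of that arithmetic, assuming a 3:1 input-to-output token mix (the weighting behind the listed ¥3.9 figure is not stated, so the ratio here is an assumption):

input_price = 2.52   # ¥ per million input tokens
output_price = 2.88  # ¥ per million output tokens

# Assumed 3:1 input:output token mix; other mixes give other blends.
ratio_in, ratio_out = 3, 1
blended = (input_price * ratio_in + output_price * ratio_out) / (ratio_in + ratio_out)
print(f"Blended: ¥{blended:.2f} /M tokens")  # ¥2.61 under this assumption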

Quick Simple Comparison

Input price (¥ /M tokens):
Llama 4 Scout: ¥0.08
Llama 4 Maverick: ¥0.17
Llama 3.2 Instruct 1B
Basic Parameters

Llama 3.2 Instruct 90B (Vision) Technical Parameters
Parameter Count: 90B (90,000M)
Context Length: 128,000 tokens
Training Data Cutoff: 2023-12-01
Open Source Category: Open Weights (Permissive License)
Multimodal Support: Text, Image
Throughput: 100
Release Date: 2024-09-25
Response Speed: 35.14 tokens/s
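
At the listed decode speed, generation time can be roughly estimated as output length divided by tokens per second. A back-of-the-envelope Python sketch (it ignores time to first token and network overhead, which are real but not listed here):

tokens_per_second = 35.14  # listed response speed
output_tokens = 1_000      # hypothetical response length

seconds = output_tokens / tokens_per_second
print(f"~{seconds:.0f} s to generate {output_tokens} tokens")  # ~28 s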

Benchmark Scores

Below are the scores of Llama 3.2 Instruct 90B (Vision) on standard benchmarks that evaluate its capabilities across different tasks and domains.
Intelligence Index: 33.37
Composite indicator of overall model intelligence.
Coding Index: 22.67
Composite indicator of performance on coding tasks.
Math Index: 33.97
Composite indicator of capability in mathematical reasoning and problem solving.
MMLU Pro: 67.1
Massive Multitask Language Understanding (Pro): a harder, reasoning-focused variant of MMLU with professional-level questions across many subjects.
GPQA: 43.2
Graduate-Level Google-Proof Q&A: advanced science questions (the Diamond subset) spanning biology, physics, and chemistry.
HLE: 4.9
Humanity's Last Exam: a broad, expert-written test of frontier academic knowledge.
LiveCodeBench: 21.4
Evaluates real-world code writing and competitive-programming problem solving.
SciCode: 24
Evaluates code generation for scientific computing and domain-specific research problems.
HumanEval: 82
OpenAI's benchmark of Python function completion, scored by unit tests.
MATH 500: 62.9
Score on the 500-problem subset of the MATH competition-mathematics benchmark.
AIME: 5
Measures ability to solve high-difficulty math competition problems at the level of the American Invitational Mathematics Examination.