InternVL3-38B
InternVL3-38B is an advanced multimodal large language model that combines powerful vision and language capabilities, achieving excellent performance in various multimodal tasks.
[GitHub] [InternVL 1.0] [InternVL 1.5] [InternVL 2.5] [InternVL2.5-MPO] [InternVL3]
[Blog] [Chat Demo] [HF Demo] [Quick Start] [Documents]

Features
Overall Performance
We introduce InternVL3, an advanced multimodal large language model (MLLM) series that demonstrates superior overall performance. Compared to InternVL 2.5, InternVL3 exhibits stronger multimodal perception and reasoning capabilities, while further extending its multimodal capabilities to encompass tool usage, GUI agents, industrial image analysis, 3D vision perception, and more. Additionally, benefiting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.
Model Architecture
InternVL3 retains the "ViT-MLP-LLM" paradigm. We integrate a newly incrementally pre-trained InternViT with various pre-trained LLMs, including InternLM 3 and Qwen 2.5, using a randomly initialized MLP projector. The model also applies a pixel unshuffle operation and a dynamic resolution strategy, and supports multi-image and video data. Moreover, it integrates Variable Visual Position Encoding (V2PE), which enhances long-context understanding capabilities.
Training Strategy
- Native Multimodal Pre-Training: Consolidates language and vision learning into a single pre-training stage, interleaving multimodal data with large-scale textual corpora.
- Supervised Fine-Tuning: Employs techniques from InternVL2.5 and uses higher-quality and more diverse training data.
- Mixed Preference Optimization: Uses MPO to align the model response distribution with the ground-truth distribution, improving reasoning performance.
- Test-Time Scaling: Uses the Best-of-N evaluation strategy and [VisualPRM-8B](https://huggingface.co/OpenGVLab/VisualPRM-8B) as the critic model for reasoning and mathematics evaluation.
Installation
The README does not provide specific installation steps. However, to use the model, you need to ensure that the transformers library is installed.
Important Note
Please use `transformers>=4.37.2` to ensure the model works normally.
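For a quick sanity check that the installed version satisfies this requirement, the sketch below compares versions with `packaging` (which ships as a transformers dependency):

```python
# Minimal version check; assumes only that transformers and its packaging
# dependency are importable.
import transformers
from packaging import version

assert version.parse(transformers.__version__) >= version.parse("4.37.2"), (
    f"transformers {transformers.__version__} is too old; please upgrade to >=4.37.2."
)
```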
Usage Examples
Basic Usage
```python
import torch
from transformers import AutoTokenizer, AutoModel

path = "OpenGVLab/InternVL3-38B"
# Load the model in bfloat16 with FlashAttention on a single GPU.
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    use_flash_attn=True,
    trust_remote_code=True).eval().cuda()
```
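Continuing from the snippet above, the sketch below runs a pure-text conversation. It assumes the `chat()` helper exposed by the model's remote code (as in previous InternVL releases); refer to the official quick start for image preprocessing and multimodal inputs.

```python
# Pure-text chat sketch; `model.chat()` is provided by the trust_remote_code
# implementation, so its exact signature may differ across releases.
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
generation_config = dict(max_new_tokens=1024, do_sample=True)

question = "Hello, who are you?"
response, history = model.chat(tokenizer, None, question, generation_config,
                               history=None, return_history=True)
print(f"User: {question}\nAssistant: {response}")
```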
Advanced Usage
BNB 8-bit Quantization
```python
import torch
from transformers import AutoTokenizer, AutoModel

path = "OpenGVLab/InternVL3-38B"
# Load the model with 8-bit weights (requires the bitsandbytes package).
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    load_in_8bit=True,
    low_cpu_mem_usage=True,
    use_flash_attn=True,
    trust_remote_code=True).eval()
```
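On recent transformers releases, the same 8-bit loading can also be expressed through `quantization_config`; the sketch below is an equivalent variant and likewise requires the `bitsandbytes` package.

```python
import torch
from transformers import AutoModel, BitsAndBytesConfig

# Alternative 8-bit loading via BitsAndBytesConfig; behavior should match the
# load_in_8bit flag used above.
model = AutoModel.from_pretrained(
    "OpenGVLab/InternVL3-38B",
    torch_dtype=torch.bfloat16,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    low_cpu_mem_usage=True,
    use_flash_attn=True,
    trust_remote_code=True).eval()
```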
Multiple GPUs
```python
import math
import torch
from transformers import AutoTokenizer, AutoModel, AutoConfig

def split_model(model_path):
    # Spread the LLM layers across all available GPUs, reserving roughly half
    # of GPU 0 for the vision encoder and the MLP projector.
    device_map = {}
    world_size = torch.cuda.device_count()
    config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
    num_layers = config.llm_config.num_hidden_layers
    # Since the first GPU will be used for ViT, treat it as half a GPU.
    num_layers_per_gpu = math.ceil(num_layers / (world_size - 0.5))
    num_layers_per_gpu = [num_layers_per_gpu] * world_size
    num_layers_per_gpu[0] = math.ceil(num_layers_per_gpu[0] * 0.5)
    layer_cnt = 0
    for i, num_layer in enumerate(num_layers_per_gpu):
        for j in range(num_layer):
            device_map[f'language_model.model.layers.{layer_cnt}'] = i
            layer_cnt += 1
    # Keep the vision tower, projector, embeddings, output head, and the final
    # LLM layer on GPU 0 so inputs and outputs stay on the same device.
    device_map['vision_model'] = 0
    device_map['mlp1'] = 0
    device_map['language_model.model.tok_embeddings'] = 0
    device_map['language_model.model.embed_tokens'] = 0
    device_map['language_model.output'] = 0
    device_map['language_model.model.norm'] = 0
    device_map['language_model.model.rotary_emb'] = 0
    device_map['language_model.lm_head'] = 0
    device_map[f'language_model.model.layers.{num_layers - 1}'] = 0
    return device_map

path = "OpenGVLab/InternVL3-38B"
device_map = split_model(path)
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    use_flash_attn=True,
    trust_remote_code=True,
    device_map=device_map).eval()
```
Documentation
InternVL3 Family
| Model Name | Vision Part | Language Part | HF Link |
|---|---|---|---|
| InternVL3-1B | InternViT-300M-448px-V2_5 | Qwen2.5-0.5B | link |
| InternVL3-2B | InternViT-300M-448px-V2_5 | Qwen2.5-1.5B | link |
| InternVL3-8B | InternViT-300M-448px-V2_5 | Qwen2.5-7B | link |
| InternVL3-9B | InternViT-300M-448px-V2_5 | internlm3-8b-instruct | link |
| InternVL3-14B | InternViT-300M-448px-V2_5 | Qwen2.5-14B | link |
| InternVL3-38B | InternViT-6B-448px-V2_5 | Qwen2.5-32B | link |
| InternVL3-78B | InternViT-6B-448px-V2_5 | Qwen2.5-72B | link |
Model Architecture
As shown in the following figure, InternVL3 retains the same model architecture as InternVL 2.5 and its predecessors, InternVL 1.5 and 2.0, following the "ViT-MLP-LLM" paradigm. In this new version, we integrate a newly incrementally pre-trained InternViT with various pre-trained LLMs, including InternLM 3 and Qwen 2.5, using a randomly initialized MLP projector.
As in the previous version, we apply a pixel unshuffle operation, reducing the number of visual tokens to one-quarter of the original. In addition, we adopt a dynamic resolution strategy similar to that of InternVL 1.5, dividing images into tiles of 448×448 pixels. The key difference, starting from InternVL 2.0, is that we additionally introduce support for multi-image and video data.
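To make the token budget concrete, here is a schematic pixel-unshuffle that folds each 2×2 block of patch features into the channel dimension; the tensor shapes are illustrative (e.g., a 448×448 tile processed with a patch size of 14 gives a 32×32 grid, i.e., 256 visual tokens after the operation), not the exact implementation shipped with the model.

```python
import torch

def pixel_unshuffle(x: torch.Tensor, scale: int = 2) -> torch.Tensor:
    # x: (batch, height, width, channels) patch features from the ViT.
    # Folds each scale x scale spatial block into the channel dimension,
    # reducing the number of visual tokens by scale**2 (4x for scale=2).
    b, h, w, c = x.shape
    x = x.reshape(b, h, w // scale, c * scale)                    # merge width pairs
    x = x.permute(0, 2, 1, 3)                                     # (b, w/2, h, 2c)
    x = x.reshape(b, w // scale, h // scale, c * scale * scale)   # merge height pairs
    return x.permute(0, 2, 1, 3)                                  # (b, h/2, w/2, 4c)

# Illustrative shapes: a 32x32 patch grid becomes 16x16 = 256 visual tokens.
feats = torch.randn(1, 32, 32, 1024)
print(pixel_unshuffle(feats).shape)  # torch.Size([1, 16, 16, 4096])
```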
Notably, in InternVL3, we integrate the Variable Visual Position Encoding (V2PE), which utilizes smaller, more flexible position increments for visual tokens. Benefiting from V2PE, InternVL3 exhibits better long-context understanding capabilities compared to its predecessors.
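As a schematic of the V2PE idea (the increment below is an illustrative placeholder, not the value used in InternVL3): text tokens advance the position index by 1, while visual tokens advance it by a smaller, configurable delta, so long multimodal sequences consume less of the usable position range.

```python
# V2PE-style position assignment sketch; delta=0.25 is purely illustrative.
def assign_positions(token_types, delta=0.25):
    positions, pos = [], 0.0
    for kind in token_types:
        positions.append(pos)
        pos += 1.0 if kind == "text" else delta
    return positions

print(assign_positions(["text", "image", "image", "image", "image", "text"]))
# [0.0, 1.0, 1.25, 1.5, 1.75, 2.0]
```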
Training Strategy
Native Multimodal Pre-Training
We propose a Native Multimodal Pre-Training approach that consolidates language and vision learning into a single pre-training stage. In contrast to standard paradigms that first train a language-only model and subsequently adapt it to handle additional modalities, our method interleaves multimodal data (e.g., image-text, video-text, or image-text interleaved sequences) with large-scale textual corpora. This unified training scheme allows the model to learn both linguistic and multimodal representations simultaneously, ultimately enhancing its capability to handle vision-language tasks without the need for separate alignment or bridging modules. Please see our paper for more details.
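A minimal sketch of the data-mixing idea follows; the sampling ratio and dataset handles are placeholders rather than the actual recipe used to train InternVL3.

```python
import random

def mixed_stream(multimodal_iter, text_iter, text_ratio=0.5):
    # Yield samples from a single stream that interleaves multimodal data
    # (image-text, video-text, interleaved sequences) with pure-text corpora,
    # so linguistic and multimodal representations are learned together.
    while True:
        source = text_iter if random.random() < text_ratio else multimodal_iter
        yield next(source)

# Usage sketch: stream = mixed_stream(iter(multimodal_ds), iter(text_corpus))
```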
Supervised Fine-Tuning
In this phase, the techniques of random JPEG compression, square loss re-weighting, and multimodal data packing proposed in InternVL2.5 are also employed in the InternVL3 series. The main advancement of the SFT phase in InternVL3 compared to InternVL2.5 lies in the use of higher-quality and more diverse training data. Specifically, we further extend training samples for tool use, 3D scene understanding, GUI operations, long-context tasks, video understanding, scientific diagrams, creative writing, and multimodal reasoning.
Mixed Preference Optimization
During pre-training and SFT, the model is trained to predict the next token conditioned on previous ground-truth tokens. However, during inference, the model predicts each token based on its own prior outputs. This discrepancy between ground-truth tokens and model-predicted tokens introduces a distribution shift, which can impair the model's Chain-of-Thought (CoT) reasoning capabilities. To mitigate this issue, we employ MPO, which introduces additional supervision from both positive and negative samples to align the model response distribution with the ground-truth distribution, thereby improving reasoning performance. Specifically, the training objective of MPO is a combination of preference loss \(\mathcal{L}_{\text{p}}\), quality loss \(\mathcal{L}_{\text{q}}\), and generation loss \(\mathcal{L}_{\text{g}}\), which can be formulated as follows:
$$ \mathcal{L}=w_{p}\cdot\mathcal{L}_{\text{p}} + w_{q}\cdot\mathcal{L}_{\text{q}} + w_{g}\cdot\mathcal{L}_{\text{g}}, $$
where \(w_{*}\) represents the weight assigned to each loss component. Please see our paper for more details about MPO.
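A minimal sketch of the combined objective; the individual loss terms and the weights are placeholders (see the paper for their actual definitions and values).

```python
def mpo_objective(l_pref, l_quality, l_gen, w_p=1.0, w_q=1.0, w_g=1.0):
    # L = w_p * L_p + w_q * L_q + w_g * L_g; the default weights here are
    # illustrative placeholders, not the values used to train InternVL3.
    return w_p * l_pref + w_q * l_quality + w_g * l_gen
```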
Test-Time Scaling
Test-Time Scaling has been shown to be an effective method to enhance the reasoning abilities of LLMs and MLLMs. In this work, we use the Best-of-N evaluation strategy and employ [VisualPRM-8B](https://huggingface.co/OpenGVLab/VisualPRM-8B) as the critic model to select the best response for reasoning and mathematics evaluation.
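A schematic Best-of-N selection loop is shown below; `generate_response` and `critic_score` are hypothetical helpers standing in for the policy model and the VisualPRM-8B critic, not APIs shipped with either model.

```python
def best_of_n(question, image, generate_response, critic_score, n=8):
    # Sample N candidate answers from the policy model and keep the one that
    # the critic model scores highest. Both callables are hypothetical stand-ins.
    candidates = [generate_response(question, image) for _ in range(n)]
    return max(candidates, key=lambda ans: critic_score(question, image, ans))
```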
Evaluation
Multimodal Capability
- Multimodal Reasoning and Mathematics
- OCR, Chart, and Document Understanding
- Multi-Image & Real-World Comprehension
- Comprehensive Multimodal & Hallucination Evaluation
- Visual Grounding
- Multimodal Multilingual Understanding
- Video Understanding
- GUI Grounding
- Spatial Reasoning
Language Capability
We compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3. Benefiting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series. Please note that the evaluation scores of the Qwen2.5 series may differ from those officially reported, as we have adopted the prompt versions provided in the table across all datasets for OpenCompass evaluation.
Ablation Study
Native Multimodal Pre-Training
We conduct experiments on the InternVL2-8B model while keeping its architecture, initialization parameters, and training data entirely unchanged. Traditionally, InternVL2-8B employs a training pipeline that begins with an MLP warm-up phase for feature alignment, followed by an instruction tuning stage. In our experiments, we substitute the conventional MLP warm-up phase with a native multimodal pre-training process. This modification isolates the contribution of native multimodal pre-training to the overall multimodal capability of the model.
The evaluation results in the figure below show that the model with native multimodal pre-training performs comparably to the fully multi-stage-trained InternVL2-8B baseline on most benchmarks. Furthermore, when followed by instruction tuning on higher-quality data, the model demonstrates further performance gains across the evaluated multimodal tasks. These findings underscore the efficiency of native multimodal pre-training in imparting powerful multimodal capabilities to MLLMs.
Mixed Preference Optimization
As shown in the table below, models fine-tuned with MPO demonstrate superior reasoning performance across seven multimodal reasoning benchmarks compared to their counterparts without MPO. Specifically, InternVL3-78B and InternVL3-38B outperform their counterparts by 4.1 and 4.5 points, respectively. Notably, the training data used for MPO is a subset of that used for SFT, indicating that the performance improvements primarily stem from the training algorithm rather than the training data.
Variable Visual Position Encoding
As reported in the table below, the introduction of V2PE leads to significant performance gains across most evaluation metrics. In addition, our ablation studies, which vary the positional increment \( \delta \), reveal that even for tasks primarily involving conventional contexts, relatively small \( \delta \) values can achieve optimal performance. These findings provide important insights for future efforts aimed at refining position encoding strategies for visual tokens in MLLMs.
Technical Details
Model Information
| Property | Details |
|---|---|
| Pipeline Tag | image-text-to-text |
| Library Name | transformers |
| Base Model | OpenGVLab/InternViT-6B-448px-V2_5, Qwen/Qwen2.5-32B |
| Base Model Relation | merge |
| Datasets | OpenGVLab/MMPR-v1.2 |
| Language | multilingual |
| Tags | internvl, custom_code |
License
This model is released under the Qwen license.