🚀 InternVL3-14B-Instruct
InternVL3-14B-Instruct is an advanced multimodal large language model with superior overall performance, capable of handling various vision-language tasks.
[GitHub] [InternVL 1.0] [InternVL 1.5] [InternVL 2.5] [InternVL 2.5-MPO] [InternVL3]
[Blog] [Chat Demo] [HF Demo] [Quick Start] [Documents]
📚 Documentation
Introduction
This is the SFT version of InternVL3-14B, which has undergone native multimodal pre-training and SFT but has not undergone MPO. If you're unsure which version to use, please use the InternVL3-14B version.
We introduce InternVL3, an advanced multimodal large language model (MLLM) series that demonstrates superior overall performance. Compared to InternVL 2.5, InternVL3 exhibits superior multimodal perception and reasoning capabilities, while further extending its multimodal capabilities to encompass tool usage, GUI agents, industrial image analysis, 3D vision perception, and more.
Additionally, we compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3. Benefiting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.
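For quick reference, a minimal inference sketch is shown below. It assumes the custom `chat()` interface that InternVL checkpoints expose through `trust_remote_code`, and uses a simplified single-tile `load_image()` helper in place of the official dynamic-tiling preprocessing; see the linked Quick Start for the complete, officially supported code.

```python
# Minimal inference sketch (assumes the trust_remote_code `chat()` interface used by
# InternVL releases; `load_image` is a simplified stand-in for the official
# dynamic-tiling preprocessing).
import torch
import torchvision.transforms as T
from PIL import Image
from transformers import AutoModel, AutoTokenizer

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def load_image(path, input_size=448):
    """Single 448x448 tile; the official helper additionally performs dynamic tiling."""
    transform = T.Compose([
        T.Resize((input_size, input_size)),
        T.ToTensor(),
        T.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD),
    ])
    return transform(Image.open(path).convert("RGB")).unsqueeze(0)

path = "OpenGVLab/InternVL3-14B-Instruct"
model = AutoModel.from_pretrained(
    path, torch_dtype=torch.bfloat16, trust_remote_code=True
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

pixel_values = load_image("./example.jpg").to(torch.bfloat16).cuda()
question = "<image>\nDescribe this image in detail."
response = model.chat(tokenizer, pixel_values, question,
                      dict(max_new_tokens=512, do_sample=False))
print(response)
```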

InternVL3 Family
In the following table, we provide an overview of the InternVL3 series.

Model Architecture
As shown in the following figure, InternVL3 retains the same model architecture as InternVL 2.5 and its predecessors, InternVL 1.5 and 2.0, following the "ViT-MLP-LLM" paradigm. In this new version, we integrate a newly incrementally pre-trained InternViT with various pre-trained LLMs, including InternLM 3 and Qwen 2.5, using a randomly initialized MLP projector.

As in the previous version, we applied a pixel unshuffle operation, reducing the number of visual tokens to one quarter of the original. In addition, we adopted a dynamic resolution strategy similar to that of InternVL 1.5, dividing images into tiles of 448×448 pixels. The key difference, starting from InternVL 2.0, is that we additionally introduced support for multi-image and video data.
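To make the token arithmetic concrete, the sketch below shows a space-to-depth style pixel unshuffle that folds every 2×2 group of ViT patch features into the channel dimension, cutting the visual tokens to one quarter, followed by a randomly initialized MLP projector into the LLM embedding space. The dimensions and projector layout are illustrative, not the released configuration.

```python
import torch
import torch.nn as nn

def pixel_unshuffle(x: torch.Tensor) -> torch.Tensor:
    """Fold each 2x2 neighbourhood of patch features into the channel dimension.

    x: (batch, h, w, c) patch features -> (batch, h/2, w/2, 4*c),
    i.e. one quarter as many visual tokens.
    """
    b, h, w, c = x.shape
    x = x.view(b, h // 2, 2, w // 2, 2, c)
    x = x.permute(0, 1, 3, 2, 4, 5).contiguous()
    return x.view(b, h // 2, w // 2, 4 * c)

# Illustrative sizes: a 448x448 tile with 14x14 patches gives a 32x32 feature grid.
vit_dim, llm_dim = 1024, 2048
projector = nn.Sequential(                 # randomly initialised MLP connector
    nn.LayerNorm(vit_dim * 4),
    nn.Linear(vit_dim * 4, llm_dim),
    nn.GELU(),
    nn.Linear(llm_dim, llm_dim),
)

patch_feats = torch.randn(1, 32, 32, vit_dim)        # one tile from the ViT
merged = pixel_unshuffle(patch_feats)                # (1, 16, 16, 4096)
visual_tokens = projector(merged.flatten(1, 2))      # (1, 256, llm_dim)
print(visual_tokens.shape)                           # 256 visual tokens per tile
```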
Notably, in InternVL3, we integrate the Variable Visual Position Encoding (V2PE), which utilizes smaller, more flexible position increments for visual tokens. Benefiting from V2PE, InternVL3 exhibits better long context understanding capabilities compared to its predecessors.
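In spirit, V2PE lets visual tokens advance the position index by a small fractional increment while text tokens advance it by 1, so long interleaved sequences consume far less of the positional range. The increment value in the sketch below is illustrative, not the trained setting.

```python
import torch

def v2pe_position_ids(is_visual: torch.Tensor, delta: float = 0.25) -> torch.Tensor:
    """Positions grow by 1 for text tokens and by `delta` for visual tokens.

    is_visual: bool tensor of shape (seq_len,), True where a token is visual.
    Returns float positions of shape (seq_len,) for the positional encoding.
    """
    increments = torch.where(is_visual, torch.tensor(delta), torch.tensor(1.0))
    return torch.cumsum(increments, dim=0) - increments  # first token sits at position 0

# Example: 3 text tokens, 8 visual tokens, 2 text tokens.
mask = torch.tensor([False] * 3 + [True] * 8 + [False] * 2)
print(v2pe_position_ids(mask))  # the visual tokens span positions 3.0-4.75 instead of 3-10
```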
Training Strategy
Native Multimodal Pre-Training
We propose a Native Multimodal Pre-Training approach that consolidates language and vision learning into a single pre-training stage. In contrast to standard paradigms that first train a language-only model and subsequently adapt it to handle additional modalities, our method interleaves multimodal data (e.g., image-text, video-text, or image-text interleaved sequences) with large-scale textual corpora. This unified training scheme allows the model to learn both linguistic and multimodal representations simultaneously, ultimately enhancing its capability to handle vision-language tasks without the need for separate alignment or bridging modules. Please see our paper for more details.
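A minimal sketch of the data-mixing idea follows (the sampling ratio and sample formats are hypothetical): text-only and multimodal samples are drawn from a single mixed stream, so both the language and vision pathways are optimized with the same next-token objective in one pre-training stage.

```python
import random
from itertools import islice

def mixed_stream(text_corpus, multimodal_corpus, p_multimodal=0.5, seed=0):
    """Interleave text-only and multimodal samples into one pre-training stream.

    p_multimodal controls the probability of drawing a multimodal sample at each
    step; the real mixing ratio is a training hyper-parameter, not specified here.
    """
    rng = random.Random(seed)
    text_it, mm_it = iter(text_corpus), iter(multimodal_corpus)
    while True:
        source = mm_it if rng.random() < p_multimodal else text_it
        try:
            yield next(source)
        except StopIteration:
            return

# Toy usage with placeholder samples.
text = [{"text": f"document {i}"} for i in range(4)]
multimodal = [{"image": f"image_{i}.jpg", "text": f"caption {i}"} for i in range(4)]
for sample in islice(mixed_stream(text, multimodal), 6):
    print(sample)
```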
Supervised Fine-Tuning
In this phase, the techniques of random JPEG compression, square loss re-weighting, and multimodal data packing proposed in InternVL2.5 are also employed in the InternVL3 series. The main advancement of the SFT phase in InternVL3 compared to InternVL2.5 lies in the use of higher-quality and more diverse training data. Specifically, we further extend training samples for tool use, 3D scene understanding, GUI operations, long-context tasks, video understanding, scientific diagrams, creative writing, and multimodal reasoning.
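Of these techniques, random JPEG compression is the simplest to illustrate: each training image is re-encoded at a randomly chosen quality factor so the model stays robust to the compression artifacts common in web imagery. A minimal sketch with PIL (the quality range is illustrative):

```python
import io
import random
from PIL import Image

def random_jpeg_compression(image: Image.Image, quality_range=(30, 95), rng=None):
    """Re-encode an image as JPEG at a random quality to simulate compression artifacts."""
    rng = rng or random
    quality = rng.randint(*quality_range)
    buffer = io.BytesIO()
    image.convert("RGB").save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer).convert("RGB")
```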
Mixed Preference Optimization
During pre-training and SFT, the model is trained to predict the next token conditioned on previous ground-truth tokens. However, during inference, the model predicts each token based on its own prior outputs. This discrepancy between ground-truth tokens and model-predicted tokens introduces a distribution shift, which can impair the model's Chain-of-Thought (CoT) reasoning capabilities. To mitigate this issue, we employ MPO, which introduces additional supervision from both positive and negative samples to align the model response distribution with the ground-truth distribution, thereby improving reasoning performance.
Specifically, the training objective of MPO is a combination of preference loss \(\mathcal{L}_{\text{p}}\), quality loss \(\mathcal{L}_{\text{q}}\), and generation loss \(\mathcal{L}_{\text{g}}\), which can be formulated as follows:
$$
\mathcal{L} = w_{p}\cdot\mathcal{L}_{\text{p}} + w_{q}\cdot\mathcal{L}_{\text{q}} + w_{g}\cdot\mathcal{L}_{\text{g}},
$$
where \(w_{*}\) represents the weight assigned to each loss component. Please see our paper for more details about MPO.
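Read concretely, the objective is a weighted sum of the three components. The sketch below shows the combination together with a DPO-style preference term; the exact component losses and weights come from the MPO paper and are only indicated here.

```python
import torch
import torch.nn.functional as F

def preference_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected,
                    beta: float = 0.1) -> torch.Tensor:
    """DPO-style preference loss over summed log-probs of chosen vs. rejected responses."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -F.logsigmoid(beta * margin).mean()

def mpo_loss(loss_p: torch.Tensor, loss_q: torch.Tensor, loss_g: torch.Tensor,
             w_p: float = 0.8, w_q: float = 0.2, w_g: float = 1.0) -> torch.Tensor:
    """L = w_p * L_p + w_q * L_q + w_g * L_g (the weights here are illustrative)."""
    return w_p * loss_p + w_q * loss_q + w_g * loss_g
```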
Test-Time Scaling
Test-Time Scaling has been shown to be an effective method to enhance the reasoning abilities of LLMs and MLLMs. In this work, we use the Best-of-N evaluation strategy and employ VisualPRM-8B as the critic model to select the best response for reasoning and mathematics evaluation.
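In outline, Best-of-N sampling works as sketched below: draw N candidate responses, score each with the critic (here an abstract scoring callable standing in for VisualPRM-8B, whose actual interface is not reproduced), and keep the highest-scoring one.

```python
from typing import Callable

def best_of_n(question: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 8) -> str:
    """Sample n candidate responses and return the one the critic scores highest.

    generate: draws one stochastic response from the MLLM for the question.
    score:    critic / process reward model mapping (question, response) to a
              scalar quality estimate; this interface is a placeholder.
    """
    candidates = [generate(question) for _ in range(n)]
    return max(candidates, key=lambda response: score(question, response))
```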
Evaluation on Multimodal Capability
Multimodal Reasoning and Mathematics

OCR, Chart, and Document Understanding

Multi-Image & Real-World Comprehension

Comprehensive Multimodal & Hallucination Evaluation

Visual Grounding

Multimodal Multilingual Understanding

Video Understanding

GUI Grounding

Spatial Reasoning

Evaluation on Language Capability
We compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3. Benefiting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series. Please note that the evaluation scores of the Qwen2.5 series may differ from those officially reported, as we have adopted the prompt versions provided in the table across all datasets for OpenCompass evaluation.

Ablation Study
Native Multimodal Pre-Training
We conduct experiments on the InternVL2-8B model while keeping its architecture, initialization parameters, and training data entirely unchanged. Traditionally, InternVL2-8B employs a training pipeline that begins with an MLP warmup phase for feature alignment, followed by an Instruction Tuning stage. In our experiments, we substitute the conventional MLP warmup phase with a native multimodal pre-training process. This modification isolates the contribution of native multimodal pre-training to the overall multimodal capability of the model.
The evaluation results in the figure below show that the model with native multimodal pre-training performs comparably to the fully multi-stage-trained InternVL2-8B baseline on most benchmarks. Furthermore, when followed by instruction tuning on higher-quality data, the model demonstrates further performance gains across the evaluated multimodal tasks.
🔧 Technical Details
Model Information
| Property | Details |
|----------|---------|
| Pipeline Tag | image-text-to-text |
| Library Name | transformers |
| Base Model | OpenGVLab/InternVL3-14B-Instruct |
| Base Model Relation | finetune |
| Language | multilingual |
| Tags | internvl, unsloth, custom_code |
License
The project is licensed under the Apache-2.0 license.