Eagle2 9B
Eagle2-9B is the latest vision-language model (VLM) released by NVIDIA, striking a strong balance between performance and inference speed. It is built on the Qwen2.5-7B-Instruct language model together with a Siglip+ConvNext vision encoder, and supports multilingual and multimodal tasks.
Downloads: 944
Release date: 1/10/2025
Model Overview
Eagle2-9B is a high-performance open-source vision-language model that approaches VLM post-training from a data-centric perspective. By combining robust training recipes with careful model design, it achieves strong results across a wide range of benchmarks.
Model Highlights
- Balanced performance: a strong trade-off between accuracy and inference speed at 8.9B parameters
- Multimodal support: accepts text, image, and video inputs
- Long-context handling: supports context lengths of up to 16K
- Strong benchmark results: outperforms comparable open models on multiple vision-language benchmarks
Model Capabilities
- Image understanding
- Text generation
- Multimodal dialogue
- Document question answering
- Chart understanding
- Video analysis
Use Cases
- Document processing (DocVQA): extract information from document images and answer questions; scores 92.6 on the DocVQA test set.
- Visual question answering (TextVQA): answer questions about text that appears in images; scores 83.0 on the TextVQA validation set.
- Chart understanding (ChartQA): understand and answer questions grounded in chart data; scores 86.4 on the ChartQA test set.
🚀 Eagle-2
Eagle-2 is a recent family of vision-language models that combines the strengths of several foundation models and performs well in multilingual settings. The project aims to close the gap between open-source and proprietary VLMs by publishing its data strategy and implementation details, fostering reproducibility and innovation in the community.
[📂 GitHub] [📜 Eagle2 Tech Report] [🗨️ Chat Demo] [🤗 HF Demo]
🚀 Quick Start
We provide a demo inference script to help you get started quickly. The following input types are supported:
- Pure text input
- Single image input
- Multiple image input
- Video input
0. Install dependencies
pip install transformers==4.37.2
pip install flash-attn
⚠️ Important Note
The latest versions of transformers are not compatible with this model; use the pinned transformers==4.37.2 shown above.
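As a quick sanity check (a minimal sketch, not part of the official instructions), you can confirm that the pinned version is actually the one installed:

import transformers
# Fails fast if a newer, incompatible transformers release is installed.
assert transformers.__version__ == '4.37.2', f'Unexpected transformers version: {transformers.__version__}'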
1. Prepare the model worker
"""
A model worker executes the model.
Copied and modified from https://github.com/OpenGVLab/InternVL/blob/main/streamlit_demo/model_worker.py
"""
# Importing torch before transformers can cause `segmentation fault`
from transformers import AutoModel, AutoTokenizer, TextIteratorStreamer, AutoConfig
import argparse
import base64
import json
import os
import decord
import threading
import time
from io import BytesIO
from threading import Thread
import math
import requests
import torch
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
import numpy as np
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
SIGLIP_MEAN = (0.5, 0.5, 0.5)
SIGLIP_STD = (0.5, 0.5, 0.5)
def get_seq_frames(total_num_frames, desired_num_frames=-1, stride=-1):
"""
Calculate the indices of frames to extract from a video.
Parameters:
total_num_frames (int): Total number of frames in the video.
    desired_num_frames (int): Desired number of frames to extract.
    stride (int): Optional sampling stride; if stride > 0 it is used instead of desired_num_frames.
Returns:
list: List of indices of frames to extract.
"""
    assert (desired_num_frames > 0 or stride > 0) and not (desired_num_frames > 0 and stride > 0)
if stride > 0:
return list(range(0, total_num_frames, stride))
# Calculate the size of each segment from which a frame will be extracted
seg_size = float(total_num_frames - 1) / desired_num_frames
seq = []
for i in range(desired_num_frames):
# Calculate the start and end indices of each segment
start = int(np.round(seg_size * i))
end = int(np.round(seg_size * (i + 1)))
# Append the middle index of the segment to the list
seq.append((start + end) // 2)
return seq
def build_video_prompt(meta_list, num_frames, time_position=False):
# if time_position is True, the frame_timestamp is used.
# 1. pass time_position, 2. use env TIME_POSITION
time_position = os.environ.get("TIME_POSITION", time_position)
prefix = f"This is a video:\n"
for i in range(num_frames):
if time_position:
frame_txt = f"Frame {i+1} sampled at {meta_list[i]:.2f} seconds: <image>\n"
else:
frame_txt = f"Frame {i+1}: <image>\n"
prefix += frame_txt
return prefix
def load_video(video_path, num_frames=64, frame_cache_root=None):
if isinstance(video_path, str):
video = decord.VideoReader(video_path)
elif isinstance(video_path, dict):
        assert False, 'passing video_path as a dict is not supported'
fps = video.get_avg_fps()
sampled_frames = get_seq_frames(len(video), num_frames)
    sampled_timestamps = [i / fps for i in sampled_frames]
    frames = video.get_batch(sampled_frames).asnumpy()
    images = [Image.fromarray(frame) for frame in frames]
    return images, build_video_prompt(sampled_timestamps, len(images), time_position=True)
def load_image(image):
if isinstance(image, str) and os.path.exists(image):
return Image.open(image)
elif isinstance(image, dict):
if 'disk_path' in image:
return Image.open(image['disk_path'])
elif 'base64' in image:
return Image.open(BytesIO(base64.b64decode(image['base64'])))
elif 'url' in image:
response = requests.get(image['url'])
return Image.open(BytesIO(response.content))
elif 'bytes' in image:
return Image.open(BytesIO(image['bytes']))
else:
raise ValueError(f'Invalid image: {image}')
else:
raise ValueError(f'Invalid image: {image}')
def build_transform(input_size, norm_type='imagenet'):
if norm_type == 'imagenet':
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
elif norm_type == 'siglip':
MEAN, STD = SIGLIP_MEAN, SIGLIP_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
"""
    The previous version mainly focused on the aspect ratio.
    Here we also take the area ratio into account.
"""
best_factor = float('-inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
area_ratio = (ratio[0]*ratio[1]*image_size*image_size)/ area
"""
new area > 60% of original image area is enough.
"""
factor_based_on_area_n_ratio = min((ratio[0]*ratio[1]*image_size*image_size)/ area, 0.6)* \
min(target_aspect_ratio/aspect_ratio, aspect_ratio/target_aspect_ratio)
if factor_based_on_area_n_ratio > best_factor:
best_factor = factor_based_on_area_n_ratio
best_ratio = ratio
return best_ratio
def dynamic_preprocess(image, min_num=1, max_num=6, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
# calculate the existing image aspect ratio
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images
def split_model(model_path, device):
device_map = {}
world_size = torch.cuda.device_count()
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
num_layers = config.llm_config.num_hidden_layers
print('world_size', world_size)
num_layers_per_gpu_ = math.floor(num_layers / (world_size - 1))
num_layers_per_gpu = [num_layers_per_gpu_] * world_size
num_layers_per_gpu[device] = num_layers - num_layers_per_gpu_ * (world_size-1)
print(num_layers_per_gpu)
layer_cnt = 0
for i, num_layer in enumerate(num_layers_per_gpu):
for j in range(num_layer):
device_map[f'language_model.model.layers.{layer_cnt}'] = i
layer_cnt += 1
device_map['vision_model'] = device
device_map['mlp1'] = device
device_map['language_model.model.tok_embeddings'] = device
device_map['language_model.model.embed_tokens'] = device
device_map['language_model.output'] = device
device_map['language_model.model.norm'] = device
device_map['language_model.lm_head'] = device
device_map['language_model.model.rotary_emb'] = device
device_map[f'language_model.model.layers.{num_layers - 1}'] = device
return device_map
class ModelWorker:
def __init__(self, model_path, model_name,
load_8bit, device):
if model_path.endswith('/'):
model_path = model_path[:-1]
if model_name is None:
model_paths = model_path.split('/')
if model_paths[-1].startswith('checkpoint-'):
self.model_name = model_paths[-2] + '_' + model_paths[-1]
else:
self.model_name = model_paths[-1]
else:
self.model_name = model_name
print(f'Loading the model {self.model_name}')
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, use_fast=False)
tokens_to_keep = ['<box>', '</box>', '<ref>', '</ref>']
tokenizer.additional_special_tokens = [item for item in tokenizer.additional_special_tokens if item not in tokens_to_keep]
self.tokenizer = tokenizer
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
model_type = config.vision_config.model_type
self.device = torch.cuda.current_device()
if model_type == 'siglip_vision_model':
self.norm_type = 'siglip'
elif model_type == 'MOB':
self.norm_type = 'siglip'
else:
self.norm_type = 'imagenet'
if any(x in model_path.lower() for x in ['34b']):
device_map = split_model(model_path, self.device)
else:
device_map = None
if device_map is not None:
self.model = AutoModel.from_pretrained(model_path, torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
device_map=device_map,
trust_remote_code=True,
load_in_8bit=load_8bit).eval()
else:
self.model = AutoModel.from_pretrained(model_path, torch_dtype=torch.bfloat16,
trust_remote_code=True,
load_in_8bit=load_8bit).eval()
if not load_8bit and device_map is None:
self.model = self.model.to(device)
self.load_8bit = load_8bit
self.model_path = model_path
self.image_size = self.model.config.force_image_size
self.context_len = tokenizer.model_max_length
self.per_tile_len = 256
def reload_model(self):
del self.model
torch.cuda.empty_cache()
if self.device == 'auto':
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
# This can make distributed deployment work properly
self.model = AutoModel.from_pretrained(
self.model_path,
load_in_8bit=self.load_8bit,
torch_dtype=torch.bfloat16,
device_map=self.device_map,
trust_remote_code=True).eval()
else:
self.model = AutoModel.from_pretrained(
self.model_path,
load_in_8bit=self.load_8bit,
torch_dtype=torch.bfloat16,
trust_remote_code=True).eval()
if not self.load_8bit and not self.device == 'auto':
self.model = self.model.cuda()
@torch.inference_mode()
def generate(self, params):
system_message = params['prompt'][0]['content']
send_messages = params['prompt'][1:]
max_input_tiles = params['max_input_tiles']
temperature = params['temperature']
top_p = params['top_p']
max_new_tokens = params['max_new_tokens']
repetition_penalty = params['repetition_penalty']
video_frame_num = params.get('video_frame_num', 64)
do_sample = True if temperature > 0.0 else False
global_image_cnt = 0
history, pil_images, max_input_tile_list = [], [], []
for message in send_messages:
if message['role'] == 'user':
prefix = ''
if 'image' in message:
for image_data in message['image']:
pil_images.append(load_image(image_data))
prefix = prefix + f'<image {global_image_cnt + 1}><image>\n'
global_image_cnt += 1
max_input_tile_list.append(max_input_tiles)
if 'video' in message:
for video_data in message['video']:
video_frames, tmp_prefix = load_video(video_data, num_frames=video_frame_num)
pil_images.extend(video_frames)
prefix = prefix + tmp_prefix
global_image_cnt += len(video_frames)
max_input_tile_list.extend([1] * len(video_frames))
content = prefix + message['content']
history.append([content, ])
else:
history[-1].append(message['content'])
question, history = history[-1][0], history[:-1]
if global_image_cnt == 1:
question = question.replace('<image 1><image>\n', '<image>\n')
history = [[item[0].replace('<image 1><image>\n', '<image>\n'), item[1]] for item in history]
        try:
            assert len(max_input_tile_list) == len(pil_images), 'The number of max_input_tile_list and pil_images should be the same.'
        except Exception as e:
            print(f'Error: {e}')
            print(f'max_input_tile_list: {max_input_tile_list}, pil_images: {pil_images}')
            raise e
old_system_message = self.model.system_message
self.model.system_message = system_message
transform = build_transform(input_size=self.image_size, norm_type=self.norm_type)
if len(pil_images) > 0:
max_input_tiles_limited_by_contect = params['max_input_tiles']
while True:
image_tiles = []
for current_max_input_tiles, pil_image in zip(max_input_tile_list, pil_images):
if self.model.config.dynamic_image_size:
tiles = dynamic_preprocess(
pil_image, image_size=self.image_size, max_num=min(current_max_input_tiles, max_input_tiles_limited_by_contect),
use_thumbnail=self.model.config.use_thumbnail)
else:
tiles = [pil_image]
image_tiles += tiles
if (len(image_tiles) * self.per_tile_len < self.context_len):
break
else:
max_input_tiles_limited_by_contect -= 2
if max_input_tiles_limited_by_contect < 1:
break
pixel_values = [transform(item) for item in image_tiles]
pixel_values = torch.stack(pixel_values).to(self.model.device, dtype=torch.bfloat16)
print(f'Split images to {pixel_values.shape}')
else:
pixel_values = None
generation_config = dict(
num_beams=1,
max_new_tokens=max_new_tokens,
do_sample=do_sample,
temperature=temperature,
repetition_penalty=repetition_penalty,
max_length=self.context_len,
top_p=top_p,
)
response = self.model.chat(
tokenizer=self.tokenizer,
pixel_values=pixel_values,
question=question,
history=history,
return_history=False,
generation_config=generation_config,
)
self.model.system_message = old_system_message
return {'text': response, 'error_code': 0}
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--model-path', type=str, default='nvidia/Eagle2-9B')
parser.add_argument('--model-name', type=str, default='Eagle2-9B')
parser.add_argument('--device', type=str, default='cuda')
parser.add_argument('--load-8bit', action='store_true')
args = parser.parse_args()
print(f'args: {args}')
worker = ModelWorker(
args.model_path,
args.model_name,
args.load_8bit,
args.device)
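As an aside, the tiling behaviour of the dynamic_preprocess helper defined above can be checked in isolation. The snippet below is a minimal sketch; the synthetic image size and tile settings are arbitrary illustrative values, not requirements of the model:

from PIL import Image

# Tile a synthetic 1024x512 image using the dynamic_preprocess helper from the worker script.
img = Image.new('RGB', (1024, 512), color=(0, 128, 0))
tiles = dynamic_preprocess(img, min_num=1, max_num=6, image_size=448, use_thumbnail=True)
# With these dimensions the closest grid is 2x1, so two 448x448 tiles plus a thumbnail are returned.
print(f'{len(tiles)} tiles of size {tiles[0].size}')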
2. Prepare the prompt
- Single image input
prompt = [
{'role': 'system', 'content': 'You are a helpful assistant.'},
{'role': 'user', 'content': 'Describe this image in details.',
'image':[
{'url': 'https://www.nvidia.com/content/dam/en-zz/Solutions/about-nvidia/logo-and-brand/01-nvidia-logo-vert-500x200-2c50-d@2x.png'}
],
}
]
- Multiple image input
prompt = [
{'role': 'system', 'content': 'You are a helpful assistant.'},
{'role': 'user', 'content': 'Describe these two images in details.',
'image':[
{'url': 'https://www.nvidia.com/content/dam/en-zz/Solutions/about-nvidia/logo-and-brand/01-nvidia-logo-vert-500x200-2c50-d@2x.png'},
{'url': 'https://www.nvidia.com/content/dam/en-zz/Solutions/about-nvidia/logo-and-brand/01-nvidia-logo-vert-500x200-2c50-d@2x.png'}
],
}
]
- Video input
prompt = [
{'role': 'system', 'content': 'You are a helpful assistant.'},
{'role': 'user', 'content': 'Describe this video in details.',
'video':[
'path/to/your/video.mp4'
],
}
]
3. Generate the response
params = {
'prompt': prompt,
'max_input_tiles': 24,
'temperature': 0.7,
'top_p': 1.0,
'max_new_tokens': 4096,
'repetition_penalty': 1.0,
}
worker.generate(params)
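The generate method above returns a dict with a 'text' field and an 'error_code' of 0 on success, so the reply can be read like this:

result = worker.generate(params)
# Print the model's reply if generation succeeded.
if result['error_code'] == 0:
    print(result['text'])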
✨ Key Features
We are thrilled to release the latest Eagle2 series of vision-language models. Open-source VLMs have made significant progress toward closing the gap with proprietary models, but critical details about data strategies and implementation are often missing, which limits reproducibility and innovation. In this project we focus on VLM post-training from a data-centric perspective, sharing our insights into building effective data strategies from scratch. By combining these strategies with robust training recipes and model design, we introduce Eagle2, a family of performant VLMs. Our goal is to enable the open-source community to develop competitive VLMs through a transparent process.
📦 Model Zoo
We provide the following models:
Model Name | Language Model | Vision Model | Max Length | Hugging Face Link |
---|---|---|---|---|
Eagle2-1B | Qwen2.5-0.5B-Instruct | Siglip | 16K | 🤗 link |
Eagle2-2B | Qwen2.5-1.5B-Instruct | Siglip | 16K | 🤗 link |
Eagle2-9B | Qwen2.5-7B-Instruct | Siglip+ConvNext | 16K | 🤗 link |
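For reference, the sketch below mirrors the from_pretrained calls used in the worker script above, shown for the Eagle2-9B checkpoint (swap the repo id for the other sizes); it assumes a CUDA device is available:

from transformers import AutoModel, AutoTokenizer
import torch

# Load tokenizer and model the same way the worker script does (single-GPU, bfloat16).
model_path = 'nvidia/Eagle2-9B'
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, use_fast=False)
model = AutoModel.from_pretrained(model_path, torch_dtype=torch.bfloat16,
                                  trust_remote_code=True).eval().cuda()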
📚 Documentation
Benchmark Results
Benchmark | MiniCPM-Llama3-V-2_5 | InternVL-Chat-V1-5 | InternVL2-8B | QwenVL2-7B | Eagle2-9B |
---|---|---|---|---|---|
Model size | 8.5B | 25.5B | 8.1B | 8.3B | 8.9B |
DocVQA (test) | 84.8 | 90.9 | 91.6 | 94.5 | 92.6 |
ChartQA (test) | - | 83.8 | 83.3 | 83.0 | 86.4 |
InfoVQA (test) | - | 72.5 | 74.8 | 74.3 | 77.2 |
TextVQA (val) | 76.6 | 80.6 | 77.4 | 84.3 | 83.0 |
OCRBench | 725 | 724 | 794 | 845 | 868 |
MME (sum) | 2024.6 | 2187.8 | 2210.3 | 2326.8 | 2260 |
RealWorldQA | 63.5 | 66.0 | 64.4 | 70.1 | 69.3 |
AI2D (test) | 78.4 | 80.7 | 83.8 | - | 83.9 |
MMMU (val) | 45.8 | 45.2 / 46.8 | 49.3 / 51.8 | 54.1 | 56.1 |
MMBench_V1.1 (test) | 79.5 | 79.4 | 80.6 | | |
MMVet (GPT-4-Turbo) | 52.8 | 55.4 | 54.2 | 62.0 | 62.2 |
SEED-Image | 72.3 | 76.0 | 76.2 | 77.1 | |
HallBench (avg) | 42.4 | 49.3 | 45.2 | 50.6 | 49.3 |
MathVista (testmini) | 54.3 | 53.5 | 58.3 | 58.2 | 63.8 |
MMStar | - | - | 60.9 | 60.7 | 62.6 |
📄 License
- The code is released under the Apache 2.0 license.
- The pretrained model weights are released under the Creative Commons Attribution-NonCommercial 4.0 International license.
- This service is a research preview intended for non-commercial use only and is subject to the following licenses and terms:
  - Model license of Qwen2.5-7B-Instruct: Apache-2.0
  - Model license of PaliGemma: Gemma license
📚 Citation
To be added.
🔧 Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices that support the development of a broad range of AI applications. When developers download or use this model under our terms of service, they should work with their internal model team to ensure it meets the requirements of the relevant industry and use case, and to address any unintended misuse.
Please report security vulnerabilities or NVIDIA AI concerns here.
📋 TODO
- [ ] Support vLLM inference
- [ ] Provide AWQ quantized weights
- [ ] Provide fine-tuning scripts
📊 Model Information
Property | Details |
---|---|
Model type | Vision-language model |
Base models | google/paligemma-3b-mix-448, Qwen/Qwen2.5-7B-Instruct, google/siglip-so400m-patch14-384, timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k |
Base model relation | Merge |
Language support | Multilingual |
Tags | eagle, VLM |
License | cc-by-nc-4.0 |
Pipeline tag | Image-text-to-text |
Library name | transformers |