InternVideo2-Chat-8B-HD
To further enrich the semantics embedded in InternVideo2 and improve its user-friendliness in human communications, this model tunes InternVideo2 by incorporating it into a VideoLLM.
[GitHub] [Tech Report]
🚀 Quick Start
To further enrich the semantics embedded in InternVideo2 and improve its user-friendliness in human communications, we fine-tune InternVideo2 by integrating it into a VideoLLM with an LLM and a video BLIP. We adopt the progressive learning scheme from VideoChat, using InternVideo2 as the video encoder and training a video BLIP module to communicate with an open-sourced LLM. The video encoder is updated during training. Detailed training recipes can be found in VideoChat. This model has been trained with HD data.
The base LLM of this model is Mistral-7B. Before using it, please ensure that you have obtained access permission for Mistral-7B. If you have not, please go to Mistral-7B to request access and add your HF token to your environment variables.
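If you prefer to authenticate from Python rather than exporting the variable in a shell, one option is the huggingface_hub client (a minimal sketch; any standard way of supplying the token to transformers works equally well):

import os
from huggingface_hub import login

# Log this session in to the Hub with the token you exported as HF_TOKEN.
login(token=os.environ["HF_TOKEN"])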
⨠Features
- Semantic Enrichment: Enriches the semantics embedded in InternVideo2.
- User-Friendly Communication: Improves user-friendliness in human communications.
- Progressive Learning: Employs the progressive learning scheme from VideoChat.
📦 Installation
1. Apply for access permission for this project and the base LLM.
2. Set your HF user access token as an environment variable:
   export HF_TOKEN=hf_....
   If you're unsure how to obtain a token starting with "hf_", refer to How to Get HF User access Token.
3. Ensure you have transformers >= 4.38.0 (a quick check is shown after this list).
4. Install the required Python packages from pip_requirements.
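After installation, a quick sanity check for step 3 (a minimal sketch; the authoritative dependency list is pip_requirements):

import transformers
print(transformers.__version__)  # should print 4.38.0 or newer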
💻 Usage Examples
Basic Usage
import os
token = os.environ['HF_TOKEN']

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained(
    'OpenGVLab/InternVideo2_chat_8B_HD',
    trust_remote_code=True,
    use_fast=False,
    token=token)

if torch.cuda.is_available():
    model = AutoModel.from_pretrained(
        'OpenGVLab/InternVideo2_chat_8B_HD',
        torch_dtype=torch.bfloat16,
        trust_remote_code=True).cuda()
else:
    model = AutoModel.from_pretrained(
        'OpenGVLab/InternVideo2_chat_8B_HD',
        torch_dtype=torch.bfloat16,
        trust_remote_code=True)
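# Optional and not part of the original recipe: switch to inference mode so that
# dropout and similar layers behave deterministically during chat.
model = model.eval()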
import numpy as np
import decord
from decord import VideoReader, cpu
from PIL import Image
import torch.nn.functional as F
import torchvision.transforms as T
from torchvision import transforms
from torchvision.transforms import PILToTensor
from torchvision.transforms.functional import InterpolationMode

decord.bridge.set_bridge("torch")
def get_index(num_frames, num_segments):
    # Uniformly sample num_segments frame indices, one from the centre of each segment.
    seg_size = float(num_frames - 1) / num_segments
    start = int(seg_size / 2)
    offsets = np.array([
        start + int(np.round(seg_size * idx)) for idx in range(num_segments)
    ])
    return offsets
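# For example, a 240-frame clip sampled with num_segments=8 yields the indices
# [14, 44, 74, 104, 134, 163, 193, 223] (illustrative values, one per segment):
# print(get_index(240, 8))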
def load_video(video_path, num_segments=8, return_msg=False, resolution=224, hd_num=4, padding=False):
    vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
    num_frames = len(vr)
    frame_indices = get_index(num_frames, num_segments)

    # ImageNet normalisation statistics.
    mean = (0.485, 0.456, 0.406)
    std = (0.229, 0.224, 0.225)

    transform = transforms.Compose([
        transforms.Lambda(lambda x: x.float().div(255.0)),
        transforms.Normalize(mean, std)
    ])

    frames = vr.get_batch(frame_indices)
    frames = frames.permute(0, 3, 1, 2)

    if padding:
        frames = HD_transform_padding(frames.float(), image_size=resolution, hd_num=hd_num)
    else:
        frames = HD_transform_no_padding(frames.float(), image_size=resolution, hd_num=hd_num)

    frames = transform(frames)
    T_, C, H, W = frames.shape

    # Tile each frame into resolution x resolution local crops ...
    sub_img = frames.reshape(
        1, T_, 3, H//resolution, resolution, W//resolution, resolution
    ).permute(0, 3, 5, 1, 2, 4, 6).reshape(-1, T_, 3, resolution, resolution).contiguous()

    # ... and append one downscaled global view of the whole frame.
    glb_img = F.interpolate(
        frames.float(), size=(resolution, resolution), mode='bicubic', align_corners=False
    ).to(sub_img.dtype).unsqueeze(0)

    frames = torch.cat([sub_img, glb_img]).unsqueeze(0)

    if return_msg:
        fps = float(vr.get_avg_fps())
        sec = ", ".join([str(round(f / fps, 1)) for f in frame_indices])
        msg = f"The video contains {len(frame_indices)} frames sampled at {sec} seconds."
        return frames, msg
    else:
        return frames
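# As constructed above, the returned tensor has shape
# (1, num_local_crops + 1, T, 3, resolution, resolution): the tiled local views
# plus one downscaled global view for each of the T sampled frames.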
def HD_transform_padding(frames, image_size=224, hd_num=6):
    def _padding_224(frames):
        # Pad the height up to the next multiple of 224 with white pixels, centred vertically.
        _, _, H, W = frames.shape
        tar = int(np.ceil(H / 224) * 224)
        top_padding = (tar - H) // 2
        bottom_padding = tar - H - top_padding
        left_padding = 0
        right_padding = 0
        padded_frames = F.pad(
            frames,
            pad=[left_padding, right_padding, top_padding, bottom_padding],
            mode='constant', value=255
        )
        return padded_frames

    _, _, H, W = frames.shape
    trans = False
    if W < H:
        # Portrait input: swap the roles of width and height for the resize computation.
        frames = frames.flip(-2, -1)
        trans = True
        width, height = H, W
    else:
        width, height = W, H

    # Pick the largest scale whose approximate tile count stays within hd_num.
    ratio = width / height
    scale = 1
    while scale * np.ceil(scale / ratio) <= hd_num:
        scale += 1
    scale -= 1
    new_w = int(scale * image_size)
    new_h = int(new_w / ratio)

    resized_frames = F.interpolate(
        frames, size=(new_h, new_w),
        mode='bicubic',
        align_corners=False
    )
    padded_frames = _padding_224(resized_frames)

    if trans:
        padded_frames = padded_frames.flip(-2, -1)

    return padded_frames
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    # Pick the candidate tiling (w_tiles, h_tiles) whose aspect ratio best matches the input;
    # on ties, prefer the tiling only if the input area is large enough to fill it.
    best_ratio_diff = float('inf')
    best_ratio = (1, 1)
    area = width * height
    for ratio in target_ratios:
        target_aspect_ratio = ratio[0] / ratio[1]
        ratio_diff = abs(aspect_ratio - target_aspect_ratio)
        if ratio_diff < best_ratio_diff:
            best_ratio_diff = ratio_diff
            best_ratio = ratio
        elif ratio_diff == best_ratio_diff:
            if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
                best_ratio = ratio
    return best_ratio
def HD_transform_no_padding(frames, image_size=224, hd_num=6, fix_ratio=(2,1)):
    min_num = 1
    max_num = hd_num
    _, _, orig_height, orig_width = frames.shape
    aspect_ratio = orig_width / orig_height

    # Enumerate candidate tilings (i x j crops) whose total crop count stays within hd_num.
    target_ratios = set(
        (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
        i * j <= max_num and i * j >= min_num)
    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])

    # Use the fixed tiling if given, otherwise pick the tiling closest to the video's aspect ratio.
    if fix_ratio:
        target_aspect_ratio = fix_ratio
    else:
        target_aspect_ratio = find_closest_aspect_ratio(
            aspect_ratio, target_ratios, orig_width, orig_height, image_size)

    # Target size in pixels and the number of local crops it will produce.
    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]

    # Resize the frames to the target grid (no padding).
    resized_frame = F.interpolate(
        frames, size=(target_height, target_width),
        mode='bicubic', align_corners=False
    )
    return resized_frame
video_path = "yoga.mp4"
# Uniformly sample 8 frames from the video and apply the HD transform.
video_tensor = load_video(video_path, num_segments=8, return_msg=False, resolution=224, hd_num=6)
video_tensor = video_tensor.to(model.device)
chat_history = []
response, chat_history = model.chat(
    tokenizer, '', 'Describe the action step by step.',
    media_type='video', media_tensor=video_tensor, chat_history=chat_history,
    return_history=True, generation_config={'do_sample': False})
print(response)
response, chat_history = model.chat(
    tokenizer, '', 'What is she wearing?',
    media_type='video', media_tensor=video_tensor, chat_history=chat_history,
    return_history=True, generation_config={'do_sample': False})
print(response)
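If you want more varied answers, standard transformers generation settings can be passed through generation_config. A sketch only; which keys the remote model.chat implementation honours should be verified against its code:

response, chat_history = model.chat(
    tokenizer, '', 'Summarize the video in one sentence.',
    media_type='video', media_tensor=video_tensor, chat_history=chat_history,
    return_history=True,
    generation_config={'do_sample': True, 'temperature': 0.7, 'top_p': 0.9, 'max_new_tokens': 256})
print(response)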
📚 Documentation
📊 Performance
| Model | MVBench | VideoMME (w/o sub) |
| --- | --- | --- |
| InternVideo2-Chat-8B | 60.3 | 41.9 |
| InternVideo2-Chat-8B-HD | 65.4 | 46.1 |
| InternVideo2-Chat-8B-HD-F16 | 67.5 | 49.4 |
| InternVideo2-Chat-8B-InternLM | 61.9 | 49.1 |
✏️ Citation
If this work is helpful for your research, please consider citing InternVideo and VideoChat.
@article{wang2024internvideo2,
title={Internvideo2: Scaling video foundation models for multimodal video understanding},
author={Wang, Yi and Li, Kunchang and Li, Xinhao and Yu, Jiashuo and He, Yinan and Wang, Chenting and Chen, Guo and Pei, Baoqi and Zheng, Rongkun and Xu, Jilan and Wang, Zun and others},
journal={arXiv preprint arXiv:2403.15377},
year={2024}
}
@article{li2023videochat,
title={Videochat: Chat-centric video understanding},
author={Li, KunChang and He, Yinan and Wang, Yi and Li, Yizhuo and Wang, Wenhai and Luo, Ping and Wang, Yali and Wang, Limin and Qiao, Yu},
journal={arXiv preprint arXiv:2305.06355},
year={2023}
}
📄 License
This project is licensed under the MIT license.
⚠️ Important Note
You agree not to use the model to conduct experiments that cause harm to human subjects.
💡 Usage Tip
Before using the model, ensure that you have obtained access permission for Mistral-7B. If you have not yet, please go to Mistral-7B to request access and add your HF token to your environment variables.