🚀 Oryx-1.5-7B
Oryx-1.5-7B is a multimodal large model built on the Qwen2.5 language model with a 32K-token context window, and it delivers strong performance on image and video understanding tasks. It can seamlessly and efficiently process visual inputs of arbitrary spatial sizes and temporal lengths.
Basic Information

| Property | Details |
|----------|---------|
| Base model | Qwen/Qwen2.5-7B-Instruct |
| Training dataset | Oryx-SFT-Data |
| Supported languages | English, Chinese |
| License | apache-2.0 |
| Task type | video-text-to-text |
| Library name | oryx |
Project Links
- Repository: https://github.com/Oryx-mllm/Oryx
- Project page: https://oryx-mllm.github.io
- Paper: https://arxiv.org/abs/2409.12961
🚀 Quick Start
We provide a simple usage example below; for more details, please refer to our GitHub repository.
```python
from oryx.model.builder import load_pretrained_model
from oryx.mm_utils import tokenizer_image_token
from oryx.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from oryx.conversation import conv_templates
import copy
import torch
import numpy as np
from decord import VideoReader, cpu  # video decoding (pip install decord)


def load_video(video_path, max_frames_num, fps=1, force_sample=False):
    """Sample frames from a video, uniformly capping the count at max_frames_num."""
    if max_frames_num == 0:
        return np.zeros((1, 336, 336, 3))
    vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
    total_frame_num = len(vr)
    video_time = total_frame_num / vr.get_avg_fps()
    fps = round(vr.get_avg_fps() / fps)  # frame step for the target sampling rate
    frame_idx = [i for i in range(0, len(vr), fps)]
    frame_time = [i / fps for i in frame_idx]
    if len(frame_idx) > max_frames_num or force_sample:
        # Fall back to uniform sampling of exactly max_frames_num frames.
        sample_fps = max_frames_num
        uniform_sampled_frames = np.linspace(0, total_frame_num - 1, sample_fps, dtype=int)
        frame_idx = uniform_sampled_frames.tolist()
        frame_time = [i / vr.get_avg_fps() for i in frame_idx]
    frame_time = ",".join([f"{i:.2f}s" for i in frame_time])
    spare_frames = vr.get_batch(frame_idx).asnumpy()
    return spare_frames, frame_time, video_time


# Load the pretrained model, tokenizer, and image processor.
pretrained = "THUdyh/Oryx-1.5-7B"
model_name = "oryx_qwen"
device = "cuda"
device_map = "auto"
tokenizer, model, image_processor, max_length = load_pretrained_model(pretrained, None, model_name, device_map=device_map)
model.eval()

# Sample up to 64 frames from the input video and preprocess them.
video_path = ""
max_frames_num = 64
video, frame_time, video_time = load_video(video_path, max_frames_num, 1, force_sample=True)
video = image_processor.preprocess(video, return_tensors="pt")["pixel_values"].cuda().bfloat16()
video = [video]
video_data = (video, video)
input_data = (video_data, (384, 384), "video")

# Build the conversation prompt with the image placeholder token.
conv_template = "qwen_1_5"
question = DEFAULT_IMAGE_TOKEN + "\nPlease describe this video in detail."
conv = copy.deepcopy(conv_templates[conv_template])
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
prompt_question = conv.get_prompt()
input_ids = tokenizer_image_token(prompt_question, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(device)

output_ids = model.generate(
    inputs=input_ids,
    images=input_data[0][0],
    images_highres=input_data[0][1],
    modalities=["video"],  # modality tag for the visual input
    do_sample=False,
    temperature=0,
    max_new_tokens=128,
    use_cache=True,
)
text_outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
print(text_outputs)
```
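For a quick sanity check of the sampling helper on its own, the snippet below inspects what `load_video` returns (the file name is hypothetical; any local video readable by decord works):

```python
# Sanity check for load_video (hypothetical local file "sample.mp4").
frames, frame_time, video_time = load_video("sample.mp4", max_frames_num=8, fps=1, force_sample=True)
print(frames.shape)   # (8, H, W, 3) uint8 frame array
print(frame_time)     # comma-separated timestamps, e.g. "0.00s,1.25s,..."
print(video_time)     # total video duration in seconds
```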
✨ Key Features
- Multimodal processing: seamlessly and efficiently handles visual inputs of arbitrary spatial sizes and temporal lengths.
- Long context window: built on the Qwen2.5 language model with a 32K-token context window (a rough budget sketch follows this list).
- Multilingual support: supports both English and Chinese.
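As a rough illustration of what the 32K-token window buys for video input, the arithmetic below estimates a frame budget; the per-frame token count and text budget are purely hypothetical placeholders, since Oryx compresses visual tokens dynamically:

```python
# Back-of-envelope frame budget under the 32K-token context window.
# tokens_per_frame and text_budget are hypothetical placeholders.
context_window = 32_000
tokens_per_frame = 144      # hypothetical per-frame visual token count
text_budget = 2_000         # hypothetical budget for prompt and answer
max_frames = (context_window - text_budget) // tokens_per_frame
print(max_frames)           # ~208 frames under these assumptions
```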
📚 Documentation
Model Performance
- General video benchmarks (results figure)
- Long-form video understanding (results figure)
- Common image benchmarks (results figure)
- 3D spatial understanding (results figure)
Model Architecture
- Architecture: pretrained Oryx-ViT + Qwen2.5-7B (see the conceptual sketch below)
- Data: a mixture of 1.2M image/video samples
- Precision: BFloat16
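The sketch below illustrates how these components typically compose in a LLaVA-style design; it is a conceptual outline under stated assumptions, not the actual Oryx implementation (all class and argument names are hypothetical):

```python
import torch
import torch.nn as nn

# Conceptual sketch only: a native-resolution vision encoder (Oryx-ViT)
# produces visual tokens, a projector maps them into the LLM embedding
# space, and the Qwen2.5-7B backbone consumes them alongside text tokens.
class OryxSketch(nn.Module):
    def __init__(self, vision_encoder: nn.Module, projector: nn.Module, llm: nn.Module):
        super().__init__()
        self.vision_encoder = vision_encoder  # Oryx-ViT
        self.projector = projector            # visual features -> LLM hidden size
        self.llm = llm                        # Qwen2.5-7B-Instruct backbone

    def forward(self, pixel_values: torch.Tensor, text_embeds: torch.Tensor):
        visual_tokens = self.projector(self.vision_encoder(pixel_values))
        # Prepend visual tokens to the text embeddings and run the LLM.
        return self.llm(inputs_embeds=torch.cat([visual_tokens, text_embeds], dim=1))
```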
Hardware and Software
- Hardware: 64 × NVIDIA Tesla A100 GPUs
- Orchestration: HuggingFace Trainer (a minimal configuration sketch follows this list)
- Framework: PyTorch
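A minimal sketch of what the listed setup implies in HuggingFace Trainer terms; the output path, batch size, and step counts are hypothetical, not the authors' actual configuration:

```python
from transformers import TrainingArguments

# Illustrative only: BFloat16 training via HuggingFace Trainer, as listed above.
args = TrainingArguments(
    output_dir="./oryx-sft",          # hypothetical output path
    bf16=True,                        # BFloat16 precision
    per_device_train_batch_size=1,    # hypothetical
    gradient_accumulation_steps=8,    # hypothetical
    num_train_epochs=1,               # hypothetical
    report_to="none",
)
```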
📄 License
This project is released under the apache-2.0 license.
📚 Citation
If you use the code or models from this project, please cite the following paper:
```bibtex
@article{liu2024oryx,
  title={Oryx MLLM: On-Demand Spatial-Temporal Understanding at Arbitrary Resolution},
  author={Liu, Zuyan and Dong, Yuhao and Liu, Ziwei and Hu, Winston and Lu, Jiwen and Rao, Yongming},
  journal={arXiv preprint arXiv:2409.12961},
  year={2024}
}
```