🚀 LongVA-7B-TPO
This repository contains the model described in the paper Temporal Preference Optimization for Long-form Video Understanding.
LongVA-7B-TPO, introduced in that paper, is built on LongVA-7B with temporal preference optimization (TPO). The model achieves state-of-the-art results across a range of benchmarks, with an average improvement of 2% over LongVA-7B.
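TPO post-trains the model on preference pairs that contrast temporally well-grounded responses with less grounded ones. Below is a minimal sketch of a generic DPO-style preference loss of the kind such preference optimization typically builds on; it is an illustration only, not the paper's training code, and `preference_loss` and its toy inputs are our own names.

```python
import torch
import torch.nn.functional as F

def preference_loss(policy_chosen_logps, policy_rejected_logps,
                    ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO-style loss: push the policy to assign higher likelihood to the
    preferred (temporally grounded) response than to the rejected one,
    measured relative to a frozen reference model."""
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

# Toy usage with per-example sequence log-probabilities.
loss = preference_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                       torch.tensor([-13.0]), torch.tensor([-14.5]))
print(loss.item())
```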



📦 Model Information
| Attribute | Details |
|-----------|---------|
| Model type | LongVA-7B-TPO |
| Base model | lmms-lab/LongVA-7B |
| Dataset | ruili0/LongVA-TPO-10k |
| Library | transformers |
| Task type | Video-text-to-text |
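The TPO checkpoint used in the quick start below and the preference dataset listed above are both hosted on the Hugging Face Hub. As a convenience (not part of the official instructions), they can be fetched with `huggingface_hub`:

```python
from huggingface_hub import snapshot_download

# Download the TPO checkpoint and the 10k preference dataset to the local cache.
model_dir = snapshot_download(repo_id="ruili0/LongVA-TPO")
data_dir = snapshot_download(repo_id="ruili0/LongVA-TPO-10k", repo_type="dataset")
print(model_dir, data_dir)
```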
📊 Evaluation Results
| Model | Size | LongVideoBench | MLVU | VideoMME (Short) | VideoMME (Medium) | VideoMME (Long) | VideoMME (Average) |
|-------|------|----------------|------|------------------|-------------------|-----------------|---------------------|
| LongVA-7B [1] | 7B | 51.3 | 58.8 | 61.3/61.6 | 50.4/53.6 | 46.2/47.6 | 52.6/54.3 |
| LongVA-TPO (ours) | 7B | 54.2 | 61.7 | 63.1/66.6 | 54.8/55.3 | 47.4/47.9 | 55.1/56.6 |
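For reference, the VideoMME (Average) column is the arithmetic mean of the Short/Medium/Long columns, with the pair of scores in each cell averaged position-wise. A quick check against the table above:

```python
# Verify the VideoMME (Average) column from the three duration splits.
rows = {
    "LongVA-7B":  [(61.3, 61.6), (50.4, 53.6), (46.2, 47.6)],
    "LongVA-TPO": [(63.1, 66.6), (54.8, 55.3), (47.4, 47.9)],
}
for name, splits in rows.items():
    avg = [round(sum(scores) / 3, 1) for scores in zip(*splits)]
    print(name, avg)  # LongVA-7B -> [52.6, 54.3], LongVA-TPO -> [55.1, 56.6]
```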
🚀 Quick Start
Use the code below to get started with the model. For more information, please refer to our GitHub repository.
Basic Usage
```python
from longva.model.builder import load_pretrained_model
from longva.mm_utils import tokenizer_image_token, process_images
from longva.constants import IMAGE_TOKEN_INDEX
from PIL import Image
from decord import VideoReader, cpu
import torch
import numpy as np

torch.manual_seed(0)
model_path = "ruili0/LongVA-TPO"
image_path = "local_demo/assets/lmms-eval.png"
video_path = "local_demo/assets/dc_demo.mp4"
max_frames_num = 16  # number of frames uniformly sampled from the video
gen_kwargs = {"do_sample": True, "temperature": 0.5, "top_p": None, "num_beams": 1, "use_cache": True, "max_new_tokens": 1024}
tokenizer, model, image_processor, _ = load_pretrained_model(model_path, None, "llava_qwen", device_map="cuda:0")

# Image inference
prompt = "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<image>\nDescribe the image in details.<|im_end|>\n<|im_start|>assistant\n"
input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(model.device)
image = Image.open(image_path).convert("RGB")
images_tensor = process_images([image], image_processor, model.config).to(model.device, dtype=torch.float16)
with torch.inference_mode():
    output_ids = model.generate(input_ids, images=images_tensor, image_sizes=[image.size], modalities=["image"], **gen_kwargs)
outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
print(outputs)
print("-" * 50)

# Video inference: uniformly sample max_frames_num frames and feed them as a single video input
prompt = "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<image>\nGive a detailed caption of the video as if I am blind.<|im_end|>\n<|im_start|>assistant\n"
input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(model.device)
vr = VideoReader(video_path, ctx=cpu(0))
total_frame_num = len(vr)
uniform_sampled_frames = np.linspace(0, total_frame_num - 1, max_frames_num, dtype=int)
frame_idx = uniform_sampled_frames.tolist()
frames = vr.get_batch(frame_idx).asnumpy()
video_tensor = image_processor.preprocess(frames, return_tensors="pt")["pixel_values"].to(model.device, dtype=torch.float16)
with torch.inference_mode():
    output_ids = model.generate(input_ids, images=[video_tensor], modalities=["video"], **gen_kwargs)
outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
print(outputs)
```
📄 License
This project uses datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of those original licenses, including but not limited to the OpenAI Terms of Use for the datasets and the specific licenses of the base language models (e.g., the Qwen2 license). This project does not impose any additional restrictions beyond those set out in the original licenses. Users are furthermore reminded to ensure that their use of the datasets and checkpoints complies with all applicable laws and regulations.
📚 Citation
BibTeX
```bibtex
@article{li2025temporal,
  title={Temporal Preference Optimization for Long-Form Video Understanding},
  author={Li, Rui and Wang, Xiaohan and Zhang, Yuhui and Wang, Zeyu and Yeung-Levy, Serena},
  journal={arXiv preprint arXiv:2501.13919},
  year={2025}
}
```
References
[1]. Zhang, P., Zhang, K., Li, B., Zeng, G., Yang, J., Zhang, Y., ... & Liu, Z. (2024). Long context transfer from language to vision. arXiv preprint arXiv:2406.16852.