🚀 VideoChat-R1_7B_caption
VideoChat-R1_7B_caption is a multimodal video-text-to-text model built on the Qwen/Qwen2-VL-7B-Instruct base model, intended for generating detailed descriptions of video content.
🚀 Quick Start
We provide a simple installation example:
pip install transformers
pip install qwen_vl_utils
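The usage example below loads the model with attn_implementation="flash_attention_2" and moves the inputs to "cuda", so it additionally assumes a CUDA-enabled PyTorch build and the optional flash-attn package. These extra installs reflect that assumption about your environment rather than anything stated above:

pip install torch
pip install flash-attn --no-build-isolation

If flash-attn is not available, you can drop the attn_implementation argument (or pass "sdpa") when loading the model.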
Then you can use our model:
💻 Usage Examples
Basic Usage
from transformers import Qwen2_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_path = "OpenGVLab/VideoChat-R1_7B_caption"

# Load the model and processor (flash_attention_2 requires the flash-attn package)
model = Qwen2_VLForConditionalGeneration.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto",
    attn_implementation="flash_attention_2",
)
processor = AutoProcessor.from_pretrained(model_path)

video_path = "your_video.mp4"
question = "Describe the video in detail."

# Build the chat message containing the video and the prompt
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": video_path,
                "max_pixels": 360 * 420,
                "fps": 1.0,
            },
            {
                "type": "text",
                "text": f"{question} First output the thinking process in <think> </think> tags and then output the final answer in <answer> </answer> tags.",
            },
        ],
    }
]

# Apply the chat template and prepare the model inputs
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs, video_kwargs = process_vision_info(
    messages, return_video_kwargs=True
)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
    **video_kwargs,
)
inputs = inputs.to("cuda")

# Generate and decode only the newly generated tokens
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
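The prompt asks the model to place its reasoning inside <think> </think> tags and the final caption inside <answer> </answer> tags, so in practice you usually want only the answer part of the decoded text. Below is a minimal post-processing sketch; the extract_answer helper is our own illustration and assumes the output actually follows that tag format, falling back to the raw text otherwise:

import re

def extract_answer(response: str) -> str:
    """Return the content of the <answer> tag, or the raw response if the tag is missing."""
    match = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    return match.group(1).strip() if match else response.strip()

# output_text is a list with one decoded string per sample in the batch
caption = extract_answer(output_text[0])
print(caption)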
📄 License
This project is released under the Apache-2.0 license.
✏️ Citation
If you use this model, please cite the following paper:
@article{li2025videochatr1,
  title={VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning},
  author={Li, Xinhao and Yan, Ziang and Meng, Desen and Dong, Lu and Zeng, Xiangyu and He, Yinan and Wang, Yali and Qiao, Yu and Wang, Yi and Wang, Limin},
  journal={arXiv preprint arXiv:2504.06958},
  year={2025}
}
Information Table

| Attribute | Details |
|-----------|---------|
| Model type | Video-text-to-text |
| Base model | Qwen/Qwen2-VL-7B-Instruct |
| Metrics | Accuracy |
| License | Apache-2.0 |