🚀 VideoChat-R1-thinking_7B
VideoChat-R1-thinking_7B is a multimodal video-text-to-text model built on the Qwen/Qwen2.5-VL-7B-Instruct base model and intended for video question answering tasks.
🚀 Quick Start
Install dependencies
A minimal installation looks like this:
pip install transformers
pip install qwen_vl_utils
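Note that the example below loads the model with attn_implementation="flash_attention_2", which additionally requires the flash-attn package (e.g. pip install flash-attn). If it is not available in your environment, you can remove that argument to fall back to the default attention implementation.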
Use the model
After installation, you can run our model with the following code:
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
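# Load the released checkpoint and its processor from the Hugging Face Hub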
model_path = "OpenGVLab/VideoChat-R1-thinking_7B"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
model_path, torch_dtype="auto", device_map="auto",
attn_implementation="flash_attention_2"
)
processor = AutoProcessor.from_pretrained(model_path)
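# Point to a local video file and pose a question; the prompt asks the model to
# reason inside <think>/<timestep> tags before answering in <answer> tags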
video_path = "your_video.mp4"
question = "Where is the final cup containing the object?"
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": video_path,
"max_pixels": 360 * 420,
"fps": 1.0,
},
{"type": "text", "text": f"""{question}
Output your thought process within the <think> </think> tags, including analysis with either specific timestamps (xx.xx) or time ranges (xx.xx to xx.xx) in <timestep> </timestep> tags.
Then, provide your final answer within the <answer> </answer> tags.
"""},
],
}
]
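# Render the chat template, extract video frames, and pack everything into model inputs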
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
**video_kwargs,
)
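# Move inputs to the GPU, generate, and decode only the newly generated tokens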
inputs = inputs.to("cuda")
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
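Because the model's reply follows the tag format requested in the prompt, you may want to pull out the reasoning and the final answer separately. The snippet below is a minimal sketch of one way to do that with regular expressions; the parse_response helper is our own illustration, not part of the released code.

import re

def parse_response(response: str):
    """Split a model response into its <think> reasoning and <answer> fields.
    Returns (thought, answer); either may be None if the tag is missing."""
    think = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    return (
        think.group(1).strip() if think else None,
        answer.group(1).strip() if answer else None,
    )

thought, answer = parse_response(output_text[0])
print("Reasoning:", thought)
print("Answer:", answer)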
📄 License
This project is released under the Apache-2.0 license.
✏️ Citation
If you use this project, please cite the following paper:
@article{li2025videochatr1,
title={VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning},
author={Li, Xinhao and Yan, Ziang and Meng, Desen and Dong, Lu and Zeng, Xiangyu and He, Yinan and Wang, Yali and Qiao, Yu and Wang, Yi and Wang, Limin},
journal={arXiv preprint arXiv:2504.06958},
year={2025}
}
Project Link