🚀 HelpingAI-Vision
HelpingAI-Vision is an image-to-text generation model. Built on HelpingAI-Lite with a LLaVA adapter, it understands image scenes in finer detail and is well suited to chat-style applications.
🚀 Quick Start
You can click the button below to open the project in Google Colab:
✨ Key Features
The core idea behind HelpingAI-Vision is to generate one token embedding for each of the N parts of an image, rather than N visual token embeddings for the image as a whole. Built on HelpingAI-Lite with a LLaVA adapter, this approach aims to improve scene understanding by capturing finer-grained detail.
For each crop of the image, the full SigLIP encoder produces one embedding of size [1, 1152]. All N embeddings are then passed through the LLaVA adapter, yielding token embeddings of size [N, 2560]. These tokens currently carry no explicit information about their position within the original image; adding positional information is planned for a future update.
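To make the shape flow concrete, here is a minimal, hypothetical PyTorch sketch; the two-layer MLP is only a stand-in for the real LLaVA adapter, whose exact architecture is not described here:
import torch
import torch.nn as nn

N = 4                                   # number of image crops
crop_embeddings = torch.randn(N, 1152)  # one SigLIP embedding per crop ([1, 1152] each), stacked

# Stand-in for the LLaVA adapter: project each crop embedding from 1152 to 2560 dims
adapter = nn.Sequential(nn.Linear(1152, 2560), nn.GELU(), nn.Linear(2560, 2560))

token_embeddings = adapter(crop_embeddings)
print(token_embeddings.shape)  # torch.Size([4, 2560]), i.e. [N, 2560]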
HelpingAI-Vision is fine-tuned from MC-LLaVA-3b. The model uses the ChatML prompt format, which makes it a natural fit for chat-based scenarios.
📦 Installation
Install dependencies
!pip install -q open_clip_torch timm einops
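The usage example below also relies on torch, transformers, pillow, requests, and huggingface_hub, which come preinstalled on Google Colab. Elsewhere you may need to install them as well (an assumption based on the example's imports, not an official requirements list):
!pip install -q torch transformers pillow requests huggingface_hub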
Download model files
from huggingface_hub import hf_hub_download

# Fetch the custom configuration, modeling, and processing code from the model repo
hf_hub_download(repo_id="OEvortex/HelpingAI-Vision", filename="configuration_llava.py", local_dir="./", force_download=True)
hf_hub_download(repo_id="OEvortex/HelpingAI-Vision", filename="configuration_phi.py", local_dir="./", force_download=True)
hf_hub_download(repo_id="OEvortex/HelpingAI-Vision", filename="modeling_llava.py", local_dir="./", force_download=True)
hf_hub_download(repo_id="OEvortex/HelpingAI-Vision", filename="modeling_phi.py", local_dir="./", force_download=True)
hf_hub_download(repo_id="OEvortex/HelpingAI-Vision", filename="processing_llava.py", local_dir="./", force_download=True)
💻 Usage Examples
Basic usage
import torch
from transformers import AutoTokenizer
from modeling_llava import LlavaForConditionalGeneration
from processing_llava import LlavaProcessor, OpenCLIPImageProcessor

# Load the model in half precision and move it to the GPU
model = LlavaForConditionalGeneration.from_pretrained("OEvortex/HelpingAI-Vision", torch_dtype=torch.float16)
model = model.to("cuda")

# Build the tokenizer, image processor, and combined text/image processor
tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI-Vision")
image_processor = OpenCLIPImageProcessor(model.config.preprocess_config)
processor = LlavaProcessor(image_processor, tokenizer)
from PIL import Image
import requests

# Fetch a sample image to describe
image_file = "https://images.unsplash.com/photo-1439246854758-f686a415d9da"
raw_image = Image.open(requests.get(image_file, stream=True).raw)
prompt = """<|im_start|>system
A chat between a curious human and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the human's questions.
The assistant does not hallucinate and pays very close attention to the details.<|im_end|>
<|im_start|>user
<image>
Describe the image.<|im_end|>
<|im_start|>assistant
"""
# Preprocess the prompt and image, then move the tensors to the model's device
with torch.inference_mode():
    inputs = processor(prompt, raw_image, model, return_tensors='pt')
    inputs['input_ids'] = inputs['input_ids'].to(model.device)
    inputs['attention_mask'] = inputs['attention_mask'].to(model.device)

from transformers import TextStreamer
streamer = TextStreamer(tokenizer)

# Generate a streamed answer, then print it without the prompt and end-of-turn token
with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=200, do_sample=True, top_p=0.9, temperature=1.2, eos_token_id=tokenizer.eos_token_id, streamer=streamer)
print(tokenizer.decode(output[0]).replace(prompt, "").replace("<|im_end|>", ""))
📚 Documentation
The model uses the following ChatML prompt format, suitable for chat scenarios:
<|im_start|>system
You are Vortex, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
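As a minimal illustration (the helper below is hypothetical, not part of the model's API), a user message can be slotted into this template with a plain Python function; include the <image> placeholder, as in the usage example above, whenever an image is passed:
def build_chatml_prompt(user_message: str) -> str:
    # Hypothetical helper: fills the ChatML template shown above
    return (
        "<|im_start|>system\n"
        "You are Vortex, a helpful AI assistant.<|im_end|>\n"
        "<|im_start|>user\n"
        f"{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(build_chatml_prompt("<image>\nDescribe the image."))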
📄 License
This project is released under a custom ("other") license named hsul; detailed license information is available via the license link.
Information Table

| Attribute | Details |
|-----------|---------|
| Model type | Image-to-text generation model |
| Training data | Not specified |
| Fine-tuned from | MC-LLaVA-3b |
| Prompt format | ChatML |