🚀 llava-phi-3-mini Model
llava-phi-3-mini is an image-to-text model fine-tuned by XTuner from the Phi-3-mini language model and a CLIP vision encoder. It handles image-grounded text generation effectively and provides strong support for image understanding and information extraction.
🚀 Quick Start
Chat via `pipeline`
```python
from transformers import pipeline
from PIL import Image
import requests

model_id = "xtuner/llava-phi-3-mini-hf"

# Build an image-to-text pipeline on GPU 0.
pipe = pipeline("image-to-text", model=model_id, device=0)

# Load a demo image from the HuggingFace documentation dataset.
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Phi-3 chat format; the <image> placeholder marks where the image is inserted.
prompt = "<|user|>\n<image>\nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud<|end|>\n<|assistant|>\n"

outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
print(outputs)
# >>> [{'generated_text': '\nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud (1) lava'}]
```
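The pipeline echoes the text of the prompt before the answer, so a little post-processing is needed if you only want the model's reply. Below is a minimal sketch, not part of the original card, assuming the output format shown above:

```python
# Hypothetical post-processing: slice off the echoed question to keep only the answer.
generated = outputs[0]["generated_text"]
question = "\nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"
answer = generated[len(question):].strip() if generated.startswith(question) else generated.strip()
print(answer)  # expected: "(1) lava"
```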
Chat via plain `transformers`
```python
import requests
from PIL import Image
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "xtuner/llava-phi-3-mini-hf"

prompt = "<|user|>\n<image>\nWhat are these?<|end|>\n<|assistant|>\n"
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"

# Load the model in fp16 and move it to GPU 0.
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to(0)

processor = AutoProcessor.from_pretrained(model_id)

raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16)

output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
# >>> What are these? These are two cats sleeping on a pink couch.
```
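If GPU memory is limited, the same checkpoint can typically be loaded with 4-bit weight quantization through `bitsandbytes`. This is a sketch not covered by the original card; it assumes the `bitsandbytes` package is installed and that the standard `transformers` quantization API works with this checkpoint.

```python
import torch
from transformers import AutoProcessor, BitsAndBytesConfig, LlavaForConditionalGeneration

model_id = "xtuner/llava-phi-3-mini-hf"

# Assumed low-memory variant: quantize the weights to 4-bit NF4 at load time.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)
# Generation then proceeds exactly as in the fp16 example above.
```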
Reproduce
Please refer to the documentation.
✨ Key Features
📚 Documentation
Model Information
llava-phi-3-mini is a LLaVA model fine-tuned by XTuner from the Phi-3-mini language model and a CLIP-L visual encoder on the ShareGPT4V-PT and InternVL-SFT datasets (see the model details table below).
Note: this model is in HuggingFace LLaVA format.
Related resources:
Model Details
| Model | Visual Encoder | Projector | Resolution | Pretraining Strategy | Fine-tuning Strategy | Pretrain Dataset | Fine-tune Dataset | Pretrain Epochs | Fine-tune Epochs |
| :--- | :---: | :---: | :---: | :--- | :--- | :---: | :---: | :---: | :---: |
| LLaVA-v1.5-7B | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, Frozen ViT | LLaVA-PT (558K) | LLaVA-Mix (665K) | 1 | 1 |
| LLaVA-Llama-3-8B | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, LoRA ViT | LLaVA-PT (558K) | LLaVA-Mix (665K) | 1 | 1 |
| LLaVA-Llama-3-8B-v1.1 | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, LoRA ViT | ShareGPT4V-PT (1246K) | InternVL-SFT (1268K) | 1 | 1 |
| LLaVA-Phi-3-mini | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, Full ViT | ShareGPT4V-PT (1246K) | InternVL-SFT (1268K) | 1 | 2 |
Results
| Model | MMBench Test (EN) | MMMU Val | SEED-IMG | AI2D Test | ScienceQA Test | HallusionBench aAcc | POPE | GQA | TextVQA | MME | MMStar |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| LLaVA-v1.5-7B | 66.5 | 35.3 | 60.5 | 54.8 | 70.4 | 44.9 | 85.9 | 62.0 | 58.2 | 1511/348 | 30.3 |
| LLaVA-Llama-3-8B | 68.9 | 36.8 | 69.8 | 60.9 | 73.3 | 47.3 | 87.2 | 63.5 | 58.0 | 1506/295 | 38.2 |
| LLaVA-Llama-3-8B-v1.1 | 72.3 | 37.1 | 70.1 | 70.0 | 72.9 | 47.7 | 86.4 | 62.6 | 59.0 | 1469/349 | 45.1 |
| LLaVA-Phi-3-mini | 69.2 | 41.4 | 70.0 | 69.3 | 73.7 | 49.8 | 87.3 | 61.5 | 57.8 | 1477/313 | 43.7 |
📄 Citation
```bibtex
@misc{2023xtuner,
    title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
    author={XTuner Contributors},
    howpublished = {\url{https://github.com/InternLM/xtuner}},
    year={2023}
}
```