🚀 BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
BLIP is an image captioning model pre-trained on the COCO dataset, using the base architecture (with a ViT-large backbone). It transfers flexibly to both vision-language understanding and generation tasks.
🚀 Quick Start
This model can be used for both conditional and unconditional image captioning.
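As a quick alternative to the full examples below, the `transformers` `image-to-text` pipeline wraps the processor and model in a single call. This is a minimal sketch added for convenience (it is not part of the original card), assuming a recent `transformers` release that ships this pipeline:

```python
from transformers import pipeline

# The image-to-text pipeline handles preprocessing, generation, and decoding.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-large")

result = captioner("https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg")
print(result[0]["generated_text"])
```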
✨ Key Features
The authors state in the paper's abstract: Vision-Language Pre-training (VLP) has advanced performance on many vision-language tasks. However, most existing pre-trained models only excel at either understanding-based or generation-based tasks. Furthermore, performance improvements have largely been achieved by scaling up datasets of noisy image-text pairs collected from the web, which is a suboptimal source of supervision. The paper proposes BLIP, a new VLP framework that transfers flexibly to both vision-language understanding and generation tasks. BLIP makes effective use of noisy web data by bootstrapping the captions: a captioner generates synthetic captions and a filter removes the noisy ones. BLIP achieves state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% average recall@1), image captioning (+2.8% CIDEr), and visual question answering (+1.6% VQA score). It also demonstrates strong generalization when transferred zero-shot to video-language tasks. Code, models, and datasets have been released.
Image from the official BLIP repository: https://github.com/salesforce/BLIP
💻 Usage Examples
Basic Usage
Using the PyTorch model
Running the model on CPU
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning: the text prompt steers the caption
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
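Both the conditional and unconditional paths also accept batches. The following is a hedged sketch (not from the original card) that reuses `processor`, `model`, and `raw_image` from the block above and assumes `BlipProcessor` accepts lists of images and prompts with `padding=True`:

```python
# Batch captioning sketch; list inputs and padding are assumptions, not card-documented.
images = [raw_image, raw_image]          # reuse the demo image twice for illustration
prompts = ["a photography of"] * len(images)

inputs = processor(images=images, text=prompts, return_tensors="pt", padding=True)
out = model.generate(**inputs)
print(processor.batch_decode(out, skip_special_tokens=True))
```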
Running the model on GPU
Full precision
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
Half precision (`float16`)
```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-large", torch_dtype=torch.float16
).to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
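Caption length and fluency can be tuned through standard `generate()` arguments. The settings below are illustrative assumptions rather than values recommended by the card; beam search typically yields more fluent captions than the greedy default:

```python
# Illustrative decoding settings (assumed values, not from the original card).
out = model.generate(**inputs, num_beams=5, max_new_tokens=40, repetition_penalty=1.2)
print(processor.decode(out[0], skip_special_tokens=True))
```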
📚 Documentation
BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2201.12086,
  doi       = {10.48550/ARXIV.2201.12086},
  url       = {https://arxiv.org/abs/2201.12086},
  author    = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
  keywords  = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title     = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
  publisher = {arXiv},
  year      = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
📄 License
This model is released under the BSD 3-Clause License.