🚀 E5-V: Universal Embeddings with Multimodal Large Language Models
E5-V is a framework for producing multimodal embeddings by adapting MLLMs. It effectively bridges the modality gap between different types of input, achieving strong performance on multimodal embedding tasks even without fine-tuning. E5-V also uses a single-modality training approach that trains on text pairs only, which outperforms multimodal training.
🚀 Quick Start
E5-V is fine-tuned from lmms-lab/llama3-llava-next-8b.
For more details, see: https://github.com/kongds/E5-V
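E5-V unifies the two modalities through a shared prompt: both an image and a sentence are followed by the same "summary in one word" instruction, so their embeddings land in a common space. A quick look at how the prompts used in the example below are assembled (pure string formatting, no model required):

```python
# The Llama-3 chat template and prompts from the E5-V usage example.
llama3_template = '<|start_header_id|>user<|end_header_id|>\n\n{}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n \n'

# The same one-word-summary instruction is applied to both modalities.
img_prompt = llama3_template.format('<image>\nSummary above image in one word: ')
text_prompt = llama3_template.format('<sent>\nSummary above sentence in one word: ')

# For text inputs, the <sent> placeholder is replaced by the actual sentence;
# for images, the processor substitutes the image for the <image> token.
filled = text_prompt.replace('<sent>', 'A dog sitting in the grass.')
print(filled)
```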
💻 Usage Example
Basic Usage
```python
import torch
import torch.nn.functional as F
import requests
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

llama3_template = '<|start_header_id|>user<|end_header_id|>\n\n{}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n \n'

processor = LlavaNextProcessor.from_pretrained('royokong/e5-v')
model = LlavaNextForConditionalGeneration.from_pretrained('royokong/e5-v', torch_dtype=torch.float16).cuda()

# The same one-word-summary prompt is used for images and sentences.
img_prompt = llama3_template.format('<image>\nSummary above image in one word: ')
text_prompt = llama3_template.format('<sent>\nSummary above sentence in one word: ')

urls = ['https://upload.wikimedia.org/wikipedia/commons/thumb/4/47/American_Eskimo_Dog.jpg/360px-American_Eskimo_Dog.jpg',
        'https://upload.wikimedia.org/wikipedia/commons/thumb/b/b6/Felis_catus-cat_on_snow.jpg/179px-Felis_catus-cat_on_snow.jpg']
images = [Image.open(requests.get(url, stream=True).raw) for url in urls]

texts = ['A dog sitting in the grass.',
         'A cat standing in the snow.']

text_inputs = processor([text_prompt.replace('<sent>', text) for text in texts], return_tensors="pt", padding=True).to('cuda')
img_inputs = processor([img_prompt]*len(images), images, return_tensors="pt", padding=True).to('cuda')

with torch.no_grad():
    # Use the last hidden state of the final token as the embedding.
    text_embs = model(**text_inputs, output_hidden_states=True, return_dict=True).hidden_states[-1][:, -1, :]
    img_embs = model(**img_inputs, output_hidden_states=True, return_dict=True).hidden_states[-1][:, -1, :]

    text_embs = F.normalize(text_embs, dim=-1)
    img_embs = F.normalize(img_embs, dim=-1)

# Cosine-similarity matrix: rows are texts, columns are images.
print(text_embs @ img_embs.t())
```
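Once the embeddings are L2-normalized, cross-modal retrieval reduces to a cosine-similarity lookup. A minimal sketch of that final step, using random stand-in tensors in place of real model outputs (the 4096-dim size is illustrative):

```python
import torch
import torch.nn.functional as F

# Stand-in embeddings; in practice these are the model's last hidden states.
torch.manual_seed(0)
text_embs = F.normalize(torch.randn(2, 4096), dim=-1)  # 2 captions
img_embs = F.normalize(torch.randn(2, 4096), dim=-1)   # 2 images

# Pairwise cosine similarities: rows = texts, columns = images.
sims = text_embs @ img_embs.t()

# Retrieval: for each text, pick the index of the most similar image.
best = sims.argmax(dim=-1)
print(sims.shape, best)
```

Because both sides are normalized, the matrix product equals cosine similarity, and `argmax` over each row gives the retrieved image per caption.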