mBLIP BLOOMZ-7B
This is a multilingual vision-LLM model checkpoint that can be used for tasks like image captioning and visual question answering in 96 languages.
Quick Start
This README provides detailed information about the mBLIP BLOOMZ-7B model, including its description, usage scenarios, and citation details.
Features
- Multilingual Support: mBLIP can handle tasks such as image captioning and visual question answering in 96 languages.
- Task Adaptability: Suitable for zero-shot conditional text generation and can be fine-tuned for downstream applications.
Installation
No dedicated installation steps are required beyond the libraries imported in the examples below (transformers, Pillow, requests); the GPU and quantized examples additionally need accelerate and bitsandbytes, as noted in their pip install comments.
Usage Examples
Basic Usage
For code examples, we refer to the BLIP-2 documentation.
Running the model on CPU
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
# Load the processor and the model weights from the Hugging Face Hub.
processor = Blip2Processor.from_pretrained("Gregor/mblip-bloomz-7b")
model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-bloomz-7b")
# Download a demo image and prompt the model in natural language.
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "Describe the image in German."
inputs = processor(raw_image, question, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
Running the model on GPU
In full precision
# pip install accelerate
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Gregor/mblip-bloomz-7b")
# device_map="auto" distributes the model over the available GPU(s).
model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-bloomz-7b", device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "Describe the image in German."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
In half precision (bfloat16)
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Gregor/mblip-bloomz-7b")
# Load the weights in bfloat16 half precision.
model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-bloomz-7b", torch_dtype=torch.bfloat16, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "Describe the image in German."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.bfloat16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
In 8-bit precision (int8)
Important Note
The paper results use int8 only for the LLM weights, while this loads all weights in int8. This gives slightly worse results, but int8 quantization of only some model parts is currently not supported by Hugging Face.
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Gregor/mblip-bloomz-7b")
# load_in_8bit=True quantizes all weights to int8 via bitsandbytes (see the note above).
model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-bloomz-7b", load_in_8bit=True, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "Describe the image in German."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.bfloat16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
In 4-bit precision (int4)
Important Note
The paper results use int4 only for the LLM weights, while this loads all weights in int4. This gives slightly worse results, but int4 quantization of only some model parts is currently not supported by Hugging Face.
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Gregor/mblip-bloomz-7b")
# 4-bit NF4 quantization via bitsandbytes; computation is done in bfloat16.
model = Blip2ForConditionalGeneration.from_pretrained(
    "Gregor/mblip-bloomz-7b",
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
    device_map="auto",
)
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "Describe the image in German."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.bfloat16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
Documentation
Model description
mBLIP is a BLIP-2 model which consists of three sub-models: a Vision Transformer (ViT), a Query-Transformer (Q-Former), and a large language model (LLM).
The Q-Former and ViT have both been initialized by an English BLIP-2 checkpoint (blip2-flan-t5-xl) and then re-aligned to the multilingual LLM (bloomz-7b1) using a multilingual task mixture.
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
in 96 languages.
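For orientation, the three sub-models are exposed as attributes of the transformers Blip2ForConditionalGeneration class. The following sketch (a rough illustration only; it loads the checkpoint as in the usage examples above, and the attribute names are those of the transformers BLIP-2 implementation) prints the parameter count of each part.
from transformers import Blip2ForConditionalGeneration
model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-bloomz-7b")
# The three parts described above, as exposed by the transformers BLIP-2 class.
parts = {
    "ViT (vision encoder)": model.vision_model,
    "Q-Former": model.qformer,
    "LLM (bloomz-7b1)": model.language_model,
}
for name, module in parts.items():
    n_params = sum(p.numel() for p in module.parameters())
    print(f"{name}: {n_params / 1e6:.0f}M parameters")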
Languages
mBLIP was trained on the following 96 languages:
af, am, ar, az, be, bg, bn, ca, ceb, cs, cy, da, de, el, en, eo, es, et, eu, fa, fi, fil, fr, ga, gd, gl, gu, ha, hi, ht, hu, hy, id, ig, is, it, iw, ja, jv, ka, kk, km, kn, ko, ku, ky, lb, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, no, ny, pa, pl, ps, pt, ro, ru, sd, si, sk, sl, sm, sn, so, sq, sr, st, su, sv, sw, ta, te, tg, th, tr, uk, ur, uz, vi, xh, yi, yo, zh, zu
Direct Use and Downstream Use
You can use the raw model for conditional text generation given an image and prompt text in a zero-shot setup, or alternatively fine-tune it for downstream applications. We strongly recommend applying LoRA to the LLM when fine-tuning and using bf16 as the data type: standard fp16 can cause NaN loss.
See our repository for the code used to train and finetune this model.
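As a rough illustration only (not the repository's actual training code), applying LoRA to the LLM with the peft library could look like the sketch below; the target module name query_key_value is the fused attention projection used by BLOOM-style models, and the rank, alpha, and dropout values are placeholder hyperparameters.
import torch
from transformers import Blip2ForConditionalGeneration
from peft import LoraConfig, get_peft_model
# Load in bf16 as recommended above (plain fp16 can lead to NaN loss).
model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-bloomz-7b", torch_dtype=torch.bfloat16)
# Apply LoRA only to the LLM; "query_key_value" is the fused attention projection
# in BLOOM-style models. r/alpha/dropout are placeholder hyperparameters.
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, target_modules=["query_key_value"])
model.language_model = get_peft_model(model.language_model, lora_config)
model.language_model.print_trainable_parameters()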
When using batched input, use left padding!
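For example, a batched call could look like the following sketch (the processor wraps a tokenizer whose padding side is switched to the left here; this is the usual transformers pattern, not something specific to this checkpoint):
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Gregor/mblip-bloomz-7b")
model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-bloomz-7b")
# Left padding so that generation continues directly after each prompt.
processor.tokenizer.padding_side = "left"
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
prompts = ["Describe the image in German.", "Describe the image in Spanish."]
inputs = processor([raw_image, raw_image], prompts, padding=True, return_tensors="pt")
out = model.generate(**inputs)
print(processor.batch_decode(out, skip_special_tokens=True))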
Bias, Risks, Limitations, and Ethical Considerations
While mBLIP can work in theory with up to 100 languages, in practice, we expect best results when prompted in high-resource languages like English, German, Spanish, etc.
mBLIP inherits the risks, limitations, and biases of the models used to initialize it. mBLIP has not been tested in real-world applications and should not be directly deployed in any application. Researchers should first carefully assess the safety and fairness of the model in the specific context in which it would be deployed.
Technical Details
The mBLIP model is based on the BLIP-2 architecture, which combines a Vision Transformer, a Query-Transformer, and a large language model. The alignment between the English BLIP-2 checkpoint and the multilingual LLM is achieved through training on a multilingual task mixture.
License
This model is licensed under the MIT license.
Citation
If you use our model, please cite the following:
@article{geigle2023mblip,
  author     = {Gregor Geigle and
                Abhay Jain and
                Radu Timofte and
                Goran Glava\v{s}},
  title      = {mBLIP: Efficient Bootstrapping of Multilingual Vision-LLMs},
  journal    = {arXiv},
  volume     = {abs/2307.06930},
  year       = {2023},
  url        = {https://arxiv.org/abs/2307.06930},
  eprinttype = {arXiv},
  eprint     = {2307.06930},
}