Model Card for MMICL
MMICL is a multimodal vision-language model built on BLIP-2/InstructBLIP. It can analyze and understand multiple images and follow instructions, achieving excellent results on complex visual reasoning datasets.
🚀 Quick Start
News
- [09-19] The MMICL demo has been moved to a permanent link: Demo for MMICL. The Vicuna version of MMICL and Chat Mode are still under development, so they may require careful tuning of the generation parameters and may not work correctly.
- [09-15] Our paper has been uploaded to arXiv.
- [09-01] The MIC data has been released on the Hugging Face Hub.
- [08-23] Reached 1st place on MME and 1st place on MMBench.
- [08-21] The MMICL-FLANT5XXL and MMICL-Tiny models have been released on the Hugging Face Hub.
Demo for MMICL
Playground for MMICL-FLANT5XXL
It supports multi-image input as well as video input.
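Video input is not demonstrated in the usage example below. A common approach, and our assumption here rather than the authors' documented pipeline, is to sample the video into frames and feed them as multiple images:

```python
# Minimal sketch (our assumption, not MMICL's documented video pipeline):
# uniformly sample frames from a video and treat each frame as one image,
# each with its own <imageN> placeholder span in the prompt.
import cv2
from PIL import Image

def sample_frames(video_path, num_frames=8):
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(num_frames):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // num_frames)
        ok, frame = cap.read()
        if ok:
            # OpenCV returns BGR; convert to the RGB PIL image the processor expects.
            frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
    cap.release()
    return frames
```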
✨ Features
Model Details
MMICL (Multi-Modal In-Context Learning) is a multimodal vision-language model that incorporates BLIP-2/InstructBLIP. It can analyze and understand multiple images and follow instructions.
Model Description
MMICL outperforms VL models of the same size and performs exceptionally well on complex visual reasoning datasets. As of Aug. 21, 2023, it achieves state-of-the-art performance on both multimodal task leaderboards and a wide range of vision-language tasks. Furthermore, it showcases new capabilities in video understanding and multimodal in-context learning (M-ICL); see the prompt sketch after the list below.
- Capability of multi-image referring and reasoning
- Manually constructed in-context instruction tuning dataset
- As of Aug. 21, 2023: 1st on MME, 1st on MMBench
- Visual Encoder: ViT-L from CLIP / ViT-G/14 from EVA-CLIP
- Pre-trained LLM: FlanT5-XL / FlanT5-XXL / Vicuna-7B / Vicuna-13B
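To make M-ICL concrete, here is a hedged sketch of a few-shot prompt skeleton in the style of the usage example below; the exemplar wording and answers are illustrative assumptions, not the authors' exact template:

```python
# Hypothetical few-shot (in-context) prompt: each exemplar pairs an image
# placeholder span with its answer, and the final query leaves the answer blank.
# "图" is the image placeholder token used in the usage example below.
image_placeholder = "图"
replace_token = "".join(32 * [image_placeholder])  # 32 placeholder tokens per image
prompt = (
    f"image 0 is <image0>{replace_token}. The answer is a cat.\n"
    f"image 1 is <image1>{replace_token}. The answer is a dog.\n"
    f"image 2 is <image2>{replace_token}. The answer is"
)
```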
Model Information
💻 Usage Examples
Basic Usage
The example images are shown in our GitHub repo MMICL.
```python
from model.instructblip import (
    InstructBlipConfig,
    InstructBlipModel,
    InstructBlipPreTrainedModel,
    InstructBlipForConditionalGeneration,
    InstructBlipProcessor,
)
import datasets
import json
import transformers
from PIL import Image
import torch

model_type = "instructblip"
model_ckpt = "BleachNick/MMICL-Instructblip-T5-xxl"
processor_ckpt = "Salesforce/instructblip-flan-t5-xxl"

config = InstructBlipConfig.from_pretrained(model_ckpt)

if 'instructblip' in model_type:
    model = InstructBlipForConditionalGeneration.from_pretrained(
        model_ckpt,
        config=config).to('cuda:0', dtype=torch.bfloat16)

# Register the image placeholder plus the per-image index tokens
# (<image0> ... <image19>) as special tokens.
image_placeholder = "图"
sp = [image_placeholder] + [f"<image{i}>" for i in range(20)]
processor = InstructBlipProcessor.from_pretrained(processor_ckpt)
sp = sp + processor.tokenizer.additional_special_tokens[len(sp):]
processor.tokenizer.add_special_tokens({'additional_special_tokens': sp})
if model.qformer.embeddings.word_embeddings.weight.shape[0] != len(processor.qformer_tokenizer):
    model.qformer.resize_token_embeddings(len(processor.qformer_tokenizer))

# Each image is represented by a span of 32 placeholder tokens in the prompt.
replace_token = "".join(32 * [image_placeholder])

image = Image.open("images/cal_num1.png")
image1 = Image.open("images/cal_num2.png")
image2 = Image.open("images/cal_num3.png")
images = [image, image1, image2]

prompt = [f'Use the image 0: <image0>{replace_token}, image 1: <image1>{replace_token} and image 2: <image2>{replace_token} as a visual aid to help you calculate the equation accurately. image 0 is 2+1=3.\nimage 1 is 5+6=11.\nimage 2 is']
prompt = " ".join(prompt)

inputs = processor(images=images, text=prompt, return_tensors="pt")
inputs['pixel_values'] = inputs['pixel_values'].to(torch.bfloat16)
# img_mask marks which image slots hold real images (all three here).
inputs['img_mask'] = torch.tensor([[1 for i in range(len(images))]])
# Add a batch dimension: (batch, num_images, channels, height, width).
inputs['pixel_values'] = inputs['pixel_values'].unsqueeze(0)

inputs = inputs.to('cuda:0')
outputs = model.generate(
    pixel_values=inputs['pixel_values'],
    input_ids=inputs['input_ids'],
    attention_mask=inputs['attention_mask'],
    img_mask=inputs['img_mask'],
    do_sample=False,
    max_length=50,
    min_length=1,
    set_min_padding_size=False,
)
generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip()
print(generated_text)
```
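When batching examples with different numbers of images, `img_mask` appears to indicate which image slots hold real images; the padding convention below is our assumption extrapolated from the example above, not documented API behavior:

```python
# Hypothetical batching sketch: pad each example's image slots to the batch
# maximum and zero out the padding in img_mask (1 = real image, 0 = padding).
import torch

img_mask = torch.tensor([
    [1, 1, 1],  # example with 3 images
    [1, 0, 0],  # example with 1 image and 2 padded slots
])  # shape: (batch, max_images)
```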
📚 Documentation
Training Hyperparameters
- Training regime: fp32, bf16 mixed precision, and bf16 non-mixed precision
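For reference, a minimal sketch of bf16 mixed precision in PyTorch (a stand-in illustration, not the authors' training code):

```python
# Sketch of bf16 mixed precision with torch.autocast: parameters stay in
# fp32 while matmuls run in bf16. Unlike fp16, bf16 needs no GradScaler.
import torch

model = torch.nn.Linear(16, 4).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
batch = torch.randn(8, 16, device='cuda')

with torch.autocast(device_type='cuda', dtype=torch.bfloat16):
    loss = model(batch).float().mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```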
📄 License
The model is released under the MIT license.