Imp
The Imp project offers a family of highly capable yet lightweight Large Multimodal Models (LMMs).
Quick Start
The Imp project aims to provide a family of highly capable yet lightweight LMMs. Our Imp-v1.5-4B-Phi3 is a strong lightweight LMM with only 4B parameters. It is built upon Phi-3 (3.8B) and the powerful visual encoder SigLIP (0.4B), and trained on a 1M-sample mixed dataset.
We release our model weights and provide an example below to run the model. A detailed technical report and the corresponding training/evaluation code will be released soon on our GitHub repo. We will keep improving the model and releasing new versions to further improve its performance :)
⨠Features
- Provides a family of highly capable yet lightweight LMMs.
- The Imp-v1.5-4B-Phi3 model has only 4B parameters; it is built on Phi-3 and SigLIP and trained on a 1M-sample mixed dataset.
- Model weights are released, along with an example for model inference.
Installation
Install dependencies
pip install transformers
pip install -q pillow accelerate einops
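The example in the next section requires a CUDA-capable GPU and PyTorch, which is assumed to be installed already. As a quick sanity check (a minimal sketch, not part of the official instructions), you can confirm the dependencies and GPU visibility:

import torch
import transformers

# Print the installed transformers version and check that a CUDA device is visible.
print("transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())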
Usage Examples
Basic Usage
You can use the following code for model inference. The format of the text instruction is similar to that of LLaVA. Note that the example currently runs on GPUs only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

torch.set_default_device("cuda")

# Load the model and tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained(
    "MILVLG/Imp-v1.5-4B-Phi3",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("MILVLG/Imp-v1.5-4B-Phi3", trust_remote_code=True)

# Build the text instruction (LLaVA-style format) and load the input image.
text = "<|user|>\n<image>\nWhat are the colors of the bus in the image?\n<|end|>\n<|assistant|>\n"
image = Image.open("images/bus.jpg")

# Tokenize the prompt and preprocess the image into the model's expected tensor format.
input_ids = tokenizer(text, return_tensors='pt').input_ids
image_tensor = model.image_preprocess(image)

# Generate the answer and decode only the newly generated tokens.
output_ids = model.generate(
    input_ids,
    max_new_tokens=100,
    images=image_tensor,
    use_cache=True)[0]
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
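To ask a different question programmatically, note that the prompt simply wraps the question in the chat markers used above. The helper below is a small convenience sketch (build_prompt is a hypothetical name, not part of the released code), reusing the model, tokenizer, and image_tensor from the example:

# Hypothetical helper that reproduces the prompt format shown above.
def build_prompt(question: str) -> str:
    return f"<|user|>\n<image>\n{question}\n<|end|>\n<|assistant|>\n"

prompt = build_prompt("How many people are in the image?")
input_ids = tokenizer(prompt, return_tensors='pt').input_ids
output_ids = model.generate(
    input_ids,
    max_new_tokens=100,
    images=image_tensor,
    use_cache=True)[0]
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())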
Documentation
We conduct evaluation on 9 commonly used benchmarks, including 5 academic VQA benchmarks and 4 popular MLLM benchmarks, to compare our Imp model with LLaVA (7B) and existing lightweight LMMs of similar model size.
| Property | Details |
|----------|---------|
| Model Type | Imp-v1.5-4B-Phi3 |
| Training Data | liuhaotian/LLaVA-Pretrain, liuhaotian/LLaVA-Instruct-150K |
| Models | Size | VQAv2 | GQA | SQA(IMG) | TextVQA | POPE | MME(P) | MMB | MMB_CN | MM-Vet |
|--------|------|-------|-----|----------|---------|------|--------|-----|--------|--------|
| Bunny-v1.0-4B | 4B | 81.5 | 63.5 | 75.1 | - | 86.7 | 1495.2 | 73.5 | - | - |
| Imp-v1.5-4B-Phi3 | 4B | 81.5 | 63.5 | 78.3 | 60.2 | 86.9 | 1507.7 | 73.3 | 61.1 | 44.6 |
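The official evaluation code will be released on our GitHub repo. Until then, a simple way to try the model on your own image/question pairs is to loop over the inference code above; the snippet below is a hedged sketch with hypothetical file paths, not our benchmark evaluation pipeline:

# Hypothetical (image, question) pairs; replace with your own data.
samples = [
    ("images/bus.jpg", "What are the colors of the bus in the image?"),
    ("images/bus.jpg", "Is there any text visible on the bus?"),
]

for image_path, question in samples:
    image = Image.open(image_path)
    image_tensor = model.image_preprocess(image)
    prompt = f"<|user|>\n<image>\n{question}\n<|end|>\n<|assistant|>\n"
    input_ids = tokenizer(prompt, return_tensors='pt').input_ids
    output_ids = model.generate(
        input_ids,
        max_new_tokens=100,
        images=image_tensor,
        use_cache=True)[0]
    answer = tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip()
    print(question, "->", answer)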
License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Citation
If you use our model or refer to our work in your studies, please cite:
@article{imp2024,
title={Imp: Highly Capable Large Multimodal Models for Mobile Devices},
author={Shao, Zhenwei and Yu, Zhou and Yu, Jun and Ouyang, Xuecheng and Zheng, Lihao and Gai, Zhenbiao and Wang, Mingyang and Ding, Jiajun},
journal={arXiv preprint arXiv:2405.12107},
year={2024}
}