xGen-MM is a series of foundational large multimodal models (LMMs) developed by Salesforce AI Research, building on the successful design of the BLIP series with foundational enhancements for a more robust and capable model. xGen-MM-instruct-interleave is the primary instruction-tuned model in the series, achieving higher composite scores on both single-image and multi-image benchmarks than the other variants.
## Model Features

- **Powerful multimodal capabilities**: Excels on single-image and multi-image benchmarks, with higher composite scores than other versions
- **Enhanced model foundation**: Incorporates foundational improvements based on the successful design of the BLIP series
- **Large-scale training**: Trained extensively on high-quality image caption datasets and interleaved image-text data

## Model Capabilities

- Image Understanding
- Text Generation
- Multi-Image Reasoning
- Visual Question Answering
## Use Cases

**Visual Question Answering**

- **Image content description**: Generates detailed textual descriptions of input images (72.2 on the SEED-IMG benchmark)
- **Multi-image reasoning**: Performs joint analysis and reasoning over multiple related images (49.7 on the BLINK benchmark)

**Education**

- **Scientific question answering**: Answers science questions based on diagrams and images (88.3 on the ScienceQA benchmark)
- **Mathematical problem solving**: Solves problems based on mathematical diagrams (39.6 on the MathVista benchmark)
---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
---
## Model description
xGen-MM is a series of the latest foundational Large Multimodal Models (LMMs) developed by Salesforce AI Research. This series advances upon the successful designs of the BLIP series, incorporating fundamental enhancements that ensure a more robust and superior foundation. These models have been trained at scale on high-quality image caption datasets and interleaved image-text data.
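As a rough illustration of what "interleaved image-text data" means, each training sample mixes ordered text chunks with image placeholders. The sketch below is invented for explanation only — the field names and placeholder token are not the actual BLIP-3 data schema:

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch of an interleaved image-text sample.
# Field names and the "<image>" placeholder are assumptions,
# not the actual BLIP-3 training format.
@dataclass
class InterleavedSample:
    segments: List[str]      # ordered text chunks and "<image>" placeholders
    image_paths: List[str]   # one image per "<image>" placeholder, in order

    def validate(self) -> bool:
        # Each "<image>" placeholder must map to exactly one image.
        return self.segments.count("<image>") == len(self.image_paths)

sample = InterleavedSample(
    segments=["A photo of", "<image>", "next to", "<image>", "."],
    image_paths=["cat.jpg", "dog.jpg"],  # hypothetical paths
)
```

The ordering is the point: unlike a caption dataset (one image, one text), interleaved data preserves documents in which text and images alternate, which is what enables multi-image reasoning at inference time.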
In the v1.5 (08/2024) release, we present a series of xGen-MM models, including:
\* GPT-4V (gpt-4-1106-preview) results are taken from this third-party leaderboard.
\*\* Model results are tested with our evaluation code for a fair comparison.
### Multi-image benchmarks

| Model | BLINK | QBench-2 | Mantis-eval |
|---|---|---|---|
| GPT-4V † | 51.1 | 73.4 | 62.7 |
| VILA-1.5-3B†† (3B) | 39.8 | 51.7 | 41.9 |
| xGen-MM-inst. (4B) | 46.6 | 52.4 | 42.4 |
| xGen-MM-inst.-interleave (4B) | 49.7 | 75.1 | 56.7 |
† GPT-4V results are the numbers reported in each benchmark's original paper.
†† Model results are tested with our evaluation code for a fair comparison.
Our evaluation is built on open-compass/VLMEvalKit. We will create a PR to that repo to support xGen-MM evaluation.
## Bias, Risks, Limitations, and Ethical Considerations

The main data sources are from the internet, including webpages, image stock sites, and curated datasets released by the research community. We have excluded certain data, such as LAION, due to known CSAM concerns. The model may be subject to bias from the original data sources, as well as bias from LLMs and commercial APIs. We strongly recommend that users assess safety and fairness before applying the model to downstream applications.
## License
Our code and weights are released under the Apache 2.0 license.
We thank the authors for their open-source implementations.
## Citation

```bibtex
@misc{blip3-xgenmm,
  author = {Le Xue and Manli Shu and Anas Awadalla and Jun Wang and An Yan and Senthil Purushwalkam and Honglu Zhou and Viraj Prabhu and Yutong Dai and Michael S Ryoo and Shrikant Kendre and Jieyu Zhang and Can Qin and Shu Zhang and Chia-Chih Chen and Ning Yu and Juntao Tan and Tulika Manoj Awalgaonkar and Shelby Heinecke and Huan Wang and Yejin Choi and Ludwig Schmidt and Zeyuan Chen and Silvio Savarese and Juan Carlos Niebles and Caiming Xiong and Ran Xu},
  title = {xGen-MM (BLIP-3): A Family of Open Large Multimodal Models},
  year = {2024},
  eprint = {2408.08872},
  archivePrefix = {arXiv},
  primaryClass = {cs.CV},
  url = {https://arxiv.org/abs/2408.08872},
}
```
## Troubleshoot

If any required packages are missing, consider installing the following:
## Ethical Considerations
This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.