# 🚀 T-Systems ColQwen2-7B: Visual Retriever based on Qwen2-VL-7B-Instruct with ColBERT strategy
This project presents a visual retriever, T-Systems ColQwen2-7B, which is based on the Qwen2-VL-7B-Instruct model and adopts the ColBERT strategy. It efficiently indexes documents from their visual features, offering a novel solution for visual document retrieval.
## 🚀 Quick Start

To start using the T-Systems ColQwen2-7B model, make sure the necessary dependencies are installed: `colpali-engine` from source or at a version greater than 0.3.4, and `transformers` at a version greater than 4.46.1. You can install `colpali-engine` with:

```bash
pip install git+https://github.com/illuin-tech/colpali
```
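To double-check the installed versions against these constraints, a quick sanity check could look like this (the distribution names below are assumptions based on the install command above):

```python
from importlib.metadata import version

# Constraints from the Quick Start: colpali-engine > 0.3.4, transformers > 4.46.1
print("colpali-engine:", version("colpali-engine"))  # expect > 0.3.4
print("transformers:", version("transformers"))      # expect > 4.46.1
```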
## ✨ Features

- **Novel Architecture**: Based on a novel model architecture and training strategy for efficient document indexing from visual features.
- **Multi-vector Representation**: Generates ColBERT-style multi-vector representations of text and images (an illustrative scoring sketch follows this list).
- **Dynamic Image Resolution**: Accepts images at their native resolution without resizing, with the maximum resolution capped so that at most 768 image patches are created.
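To make the multi-vector idea concrete, here is a minimal, illustrative MaxSim sketch in plain PyTorch. It is not the library's implementation; the `processor.score_multi_vector` call in the usage example below handles batching and padding for you.

```python
import torch

def maxsim_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    """ColBERT-style late interaction between one query and one document.

    query_emb: (num_query_tokens, dim), doc_emb: (num_doc_tokens, dim)
    """
    sim = query_emb @ doc_emb.T         # token-to-token similarities
    return sim.max(dim=1).values.sum()  # best doc token per query token, summed
```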
## 📦 Installation

To install the necessary dependencies, run:

```bash
pip install git+https://github.com/illuin-tech/colpali
```
## 💻 Usage Examples

### Basic Usage

```python
import torch
from PIL import Image

from colpali_engine.models import ColQwen2, ColQwen2Processor

# Load the adapter and its processor
model = ColQwen2.from_pretrained(
    "tsystems/colqwen2-7b-v1.0",
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",
).eval()
processor = ColQwen2Processor.from_pretrained("tsystems/colqwen2-7b-v1.0")

# Your inputs (placeholder images stand in for document pages)
images = [
    Image.new("RGB", (32, 32), color="white"),
    Image.new("RGB", (16, 16), color="black"),
]
queries = [
    "Is attention really all you need?",
    "What is the amount of bananas farmed in Salvador?",
]

# Process the inputs
batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)

# Forward pass to get the multi-vector embeddings
with torch.no_grad():
    image_embeddings = model(**batch_images)
    query_embeddings = model(**batch_queries)

# Late-interaction (MaxSim) scores between queries and images
scores = processor.score_multi_vector(query_embeddings, image_embeddings)
```
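In recent colpali-engine versions, `score_multi_vector` returns a `(num_queries, num_images)` score tensor, so ranking pages per query is straightforward; for example:

```python
# scores: (num_queries, num_images); higher is a better match
best = scores.argmax(dim=1)
for i, query in enumerate(queries):
    print(f"{query!r} -> image {best[i].item()} (score {scores[i, best[i]].item():.2f})")
```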
## 📚 Documentation

### Version specificity

This model takes images at dynamic resolutions as input and, unlike ColPali, does not resize them or change their aspect ratio. The maximum resolution is capped so that at most 768 image patches are created. Experiments show clear improvements with larger numbers of image patches, at the cost of higher memory requirements. This version is trained with `colpali-engine==0.3.4`. The training data is the same ColPali data described in the paper, and fine-tuning was carried out with the ShareGPT4V dataset (https://sharegpt4v.github.io/).
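As a rough illustration of the 768-patch cap (assuming Qwen2-VL's visual tokens each cover a 28x28-pixel area; the processor handles the exact patching and any necessary downscaling):

```python
MAX_PATCHES = 768
PATCH_SIDE = 28  # assumed pixels per visual token side in Qwen2-VL

def n_patches(width: int, height: int) -> int:
    # Ceil-divide each dimension into 28-pixel cells
    return -(-width // PATCH_SIDE) * -(-height // PATCH_SIDE)

print(n_patches(700, 600))    # 25 * 22 = 550  -> fits under the cap
print(n_patches(1024, 1024))  # 37 * 37 = 1369 -> would be downscaled to fit
```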
### Model Training

#### Parameters

We train models using low-rank adapters (LoRA) with `alpha = 64` and `r = 64` on the transformer layers of the language model, as well as on the final randomly initialized projection layer, using a `paged_adamw_8bit` optimizer. Training is conducted on an 8xH100 GPU setup with distributed data parallelism (via accelerate), a learning rate of 2e-4 with linear decay and 1% warmup steps, a per-device batch size of 64, and `bfloat16` format.
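As an illustration, a peft `LoraConfig` matching these hyperparameters might look as follows; the `target_modules` shown are an assumption, and the released adapter's `adapter_config.json` is the authoritative source:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    # Assumption: attention projections of the language-model transformer layers
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```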
### Limitations

- **Focus**: The model primarily focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less represented languages.
- **Support**: The model relies on multi-vector retrieval derived from the ColBERT late-interaction mechanism, which may require engineering effort to adapt to widely used vector retrieval frameworks that lack native multi-vector support (one common workaround is sketched below).
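For instance, one common workaround for single-vector frameworks (an approximation that trades retrieval quality for compatibility, and not part of this model's design) is to pool the token-level embeddings into one dense vector:

```python
import torch

def pool_to_single_vector(multi_vec: torch.Tensor) -> torch.Tensor:
    # multi_vec: (num_tokens, dim) -> single (dim,) vector for a standard ANN index
    pooled = multi_vec.mean(dim=0)
    return pooled / pooled.norm()  # L2-normalize for cosine-similarity search
```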
## 📄 License

ColQwen2's vision-language backbone model (Qwen2-VL) is under the Apache 2.0 license. This fine-tuned adapter is under the CC BY-NC 4.0 license; use of the model is therefore currently restricted to research.
## 📚 Citation

If you use this model from this organization in your research, please cite the original paper as follows:

```bibtex
@misc{faysse2024colpaliefficientdocumentretrieval,
  title={ColPali: Efficient Document Retrieval with Vision Language Models},
  author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
  year={2024},
  eprint={2407.01449},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2407.01449},
}
```
| Property | Details |
|----------|---------|
| Model Type | T-Systems ColQwen2-7B, a visual retriever based on Qwen2-VL-7B-Instruct with the ColBERT strategy |
| Training Data | vidore/colpali_train_set, tattrongvu/sharegpt4v_vqa_200k_batch1 |
| Base Model | Qwen/Qwen2-VL-7B-Instruct |
| Library Name | peft |
| Pipeline Tag | visual-document-retrieval |
| License | ColQwen2's vision-language backbone model (Qwen2-VL) is under the Apache 2.0 license. This fine-tuned adapter is under the CC BY-NC 4.0 license. |