ColSmolVLM-v0.1: Visual Retriever based on SmolVLM-Instruct with ColBERT strategy
ColSmolVLM is a model based on a novel architecture and training strategy for Vision Language Models (VLMs), designed to efficiently index documents from their visual features. It is an extension of SmolVLM that generates ColBERT-style multi-vector representations of text and images. The approach was introduced in the paper ColPali: Efficient Document Retrieval with Vision Language Models and first released in this repository. This version is trained with a batch size of 128 for 3 epochs, starting from a base version whose projection layer is deterministically initialized to guarantee reproducibility.
Quick Start
Prerequisites
Make sure colpali-engine is installed from source or with a version greater than 0.3.5 (currently the main branch of the repo). The transformers version must be > 4.46.2.
```bash
pip install git+https://github.com/illuin-tech/colpali
```
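If you want to confirm the installed versions before running the examples, a minimal check along these lines can help (it assumes both packages are installed under the pip distribution names colpali-engine and transformers):

```python
# Minimal version check for the prerequisites above.
# Assumes the distribution names "colpali-engine" and "transformers";
# adjust if your environment differs.
from importlib.metadata import version

print("colpali-engine:", version("colpali-engine"))  # should be > 0.3.5 (or a source install)
print("transformers:", version("transformers"))      # should be > 4.46.2
```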
Code Example
```python
import torch
from PIL import Image

from colpali_engine.models import ColIdefics3, ColIdefics3Processor

model = ColIdefics3.from_pretrained(
    "vidore/colsmolvlm-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",
    attn_implementation="flash_attention_2",  # or "eager"
).eval()

processor = ColIdefics3Processor.from_pretrained("vidore/colsmolvlm-v0.1")

# Your inputs
images = [
    Image.new("RGB", (32, 32), color="white"),
    Image.new("RGB", (16, 16), color="black"),
]
queries = [
    "Is attention really all you need?",
    "What is the amount of bananas farmed in Salvador?",
]

# Process the inputs
batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)

# Forward pass
with torch.no_grad():
    image_embeddings = model(**batch_images)
    query_embeddings = model(**batch_queries)

scores = processor.score_multi_vector(query_embeddings, image_embeddings)
```
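The returned scores form a matrix of late-interaction similarities with one row per query and one column per image, so the best-matching image for each query can be read off with an argmax. A minimal sketch, assuming scores is a 2-D PyTorch tensor indexed as (query, image):

```python
# Rank images for each query from the score matrix.
# Assumes `scores` has shape (num_queries, num_images), one similarity per pair.
best_image_per_query = scores.argmax(dim=1)
for q_idx, query in enumerate(queries):
    i_idx = best_image_per_query[q_idx].item()
    print(f"{query!r} -> image #{i_idx} (score {scores[q_idx, i_idx]:.2f})")
```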
Documentation
Version specificity
This version is trained with colpali-engine==0.3.5 (main branch of the repo). Data is the same as the ColPali data described in the paper.
Model Training
Dataset
Our training dataset of 127,460 query-page pairs comprises the train sets of openly available academic datasets (63%) and a synthetic dataset made up of pages from web-crawled PDF documents and augmented with VLM-generated (Claude-3 Sonnet) pseudo-questions (37%). Our training set is fully English by design, enabling us to study zero-shot generalization to non-English languages. We explicitly verify that no multi-page PDF document is used both in ViDoRe and in the train set to prevent evaluation contamination. A validation set is created with 2% of the samples to tune hyperparameters.
Note: Multilingual data is present in the pretraining corpus of the language model and most probably in the multimodal training.
Parameters
Unless specified otherwise, we train models in bfloat16 format, use low-rank adapters (LoRA) with alpha=32 and r=32 on the transformer layers from the language model, as well as the final randomly initialized projection layer, and use a paged_adamw_8bit optimizer. We train on a 4-GPU setup with data parallelism, a learning rate of 5e-4 with linear decay and 2.5% warmup steps, and a batch size of 32.
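For illustration only, a comparable adapter and optimizer setup could be sketched with the peft and transformers libraries as below; the target module names, dropout, and output path are assumptions rather than the exact published training recipe:

```python
# Sketch of a LoRA setup matching the hyperparameters above (alpha=32, r=32),
# applied to the language-model transformer layers. The target_modules list,
# dropout, and output_dir are illustrative assumptions.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.1,                                          # assumed value
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],   # assumed module names
    init_lora_weights="gaussian",
)

training_args = TrainingArguments(
    output_dir="./colsmolvlm-finetune",    # hypothetical path
    per_device_train_batch_size=8,         # 4 GPUs x 8 = effective batch size 32
    learning_rate=5e-4,
    lr_scheduler_type="linear",
    warmup_ratio=0.025,                    # 2.5% warmup steps
    optim="paged_adamw_8bit",
    bf16=True,
)
```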
Limitations
- Focus: The model primarily focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less represented languages.
- Support: The model relies on multi-vector retrieval derived from the ColBERT late interaction mechanism, which may require engineering efforts to adapt to widely used vector retrieval frameworks that lack native multi-vector support.
Technical Details
The model is based on a novel architecture and training strategy for Vision Language Models (VLMs). It generates ColBERT-style multi-vector representations of text and images. The training dataset is a combination of academic datasets and a synthetic dataset. The model is trained with specific parameters such as the bfloat16 format, LoRA adapters, and a paged_adamw_8bit optimizer.
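To make the ColBERT-style late interaction concrete, the sketch below shows MaxSim scoring for a single query/document pair of multi-vector embeddings; it assumes the embeddings are already L2-normalized tensors of shape (num_tokens, dim) and is intended to mirror, in spirit, what processor.score_multi_vector computes in batch.

```python
# Minimal sketch of ColBERT-style late interaction (MaxSim) scoring.
# Assumes `query_emb` and `doc_emb` are L2-normalized float tensors of shape
# (num_query_tokens, dim) and (num_doc_tokens, dim), respectively.
import torch

def maxsim_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    # Token-to-token similarity matrix: (num_query_tokens, num_doc_tokens).
    sim = query_emb @ doc_emb.T
    # For each query token, keep its best-matching document token, then sum.
    return sim.max(dim=1).values.sum()

# Toy usage with random embeddings.
q = torch.nn.functional.normalize(torch.randn(16, 128), dim=-1)
d = torch.nn.functional.normalize(torch.randn(700, 128), dim=-1)
print(maxsim_score(q, d))
```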
License
ColSmolVLM's vision-language backbone model (SmolVLM-Instruct) is under the apache-2.0 license. The adapters attached to the model are under the MIT license.
Contact
- Manuel Faysse: manuel.faysse@illuin.tech
- Hugues Sibille: hugues.sibille@illuin.tech
- Tony Wu: tony.wu@illuin.tech
Citation
If you use any datasets or models from this organization in your research, please cite the original work as follows:
```bibtex
@misc{faysse2024colpaliefficientdocumentretrieval,
  title={ColPali: Efficient Document Retrieval with Vision Language Models},
  author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
  year={2024},
  eprint={2407.01449},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2407.01449},
}
```







