ViT-L-16-SigLIP2-512 Model Card
A SigLIP 2 Vision-Language model trained on WebLI, converted for OpenCLIP.
🚀 Quick Start
ViT-L-16-SigLIP2-512 is a SigLIP 2 vision-language model trained on the WebLI dataset and converted from the original JAX checkpoints in Big Vision for use with OpenCLIP.
✨ Features
- Contrastive Image-Text, Zero-Shot Image Classification: the model pairs an image encoder with a text encoder trained via a pairwise sigmoid loss, and can be applied directly to zero-shot image classification (a sketch of the loss follows this list).
- Multilingual Vision-Language Encoders: SigLIP 2 offers improved semantic understanding, localization, and dense features, as described in the papers cited below.
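For context, here is a minimal sketch of the pairwise sigmoid loss from Zhai et al. (2023), the objective behind SigLIP. The function and tensor names are illustrative only; t and b play the roles of model.logit_scale.exp() and model.logit_bias in the OpenCLIP model.

import torch
import torch.nn.functional as F

def sigmoid_loss(image_features, text_features, t, b):
    # Pairwise logits between every image and every text in the batch;
    # both feature tensors are assumed L2-normalized, shape (N, D).
    logits = image_features @ text_features.T * t + b
    # Matching pairs (the diagonal) get label +1, all other pairs -1.
    labels = 2 * torch.eye(logits.shape[0], device=logits.device) - 1
    # Each pair is an independent binary classification problem;
    # there is no softmax over the batch, unlike the CLIP objective.
    return -F.logsigmoid(labels * logits).sum() / logits.shape[0]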
📦 Installation
No specific installation steps are provided in the original README, but the usage example below relies on open-clip, which requires open-clip-torch >= 2.31.0 and timm >= 1.0.15. Both can be installed via pip:

pip install 'open-clip-torch>=2.31.0' 'timm>=1.0.15'
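To confirm that the installed versions meet these requirements, a quick check (both packages expose a __version__ string in current releases):

import open_clip
import timm

# The usage example needs open-clip-torch >= 2.31.0 and timm >= 1.0.15.
print("open_clip:", open_clip.__version__)
print("timm:", timm.__version__)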
💻 Usage Examples
Basic Usage
import torch
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

# Load the model, its preprocessing transform, and its tokenizer from the Hub.
model, preprocess = create_model_from_pretrained('hf-hub:timm/ViT-L-16-SigLIP2-512')
tokenizer = get_tokenizer('hf-hub:timm/ViT-L-16-SigLIP2-512')

# Fetch a test image and preprocess it into a (1, 3, 512, 512) batch.
image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)

# Tokenize the candidate labels, padded to the model's context length.
labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(image, normalize=True)
    text_features = model.encode_text(text, normalize=True)
    # SigLIP scores each (image, label) pair independently with a sigmoid,
    # so the resulting probabilities do not sum to 1 across labels.
    text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)

zipped_list = list(zip(labels_list, [round(100 * p.item(), 1) for p in text_probs[0]]))
print("Label probabilities (%): ", zipped_list)
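Since timm is already a requirement, the image tower can also be used on its own to extract image embeddings. The sketch below assumes the corresponding timm model name is vit_large_patch16_siglip_512.v2_webli, following timm's SigLIP 2 naming scheme; check the timm model listing for the exact identifier.

from urllib.request import urlopen
from PIL import Image
import timm

image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

# num_classes=0 removes the classifier head so the model returns pooled
# image embeddings. The model name is an assumption based on timm's
# SigLIP 2 naming convention.
model = timm.create_model(
    'vit_large_patch16_siglip_512.v2_webli',
    pretrained=True,
    num_classes=0,
).eval()

# Build the preprocessing transform that matches the pretrained config.
data_config = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**data_config, is_training=False)

output = model(transform(image).unsqueeze(0))  # pooled embedding, (1, 1024) for ViT-L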
📚 Documentation
Model Details
- Model Type: Contrastive Image-Text, Zero-Shot Image Classification
- Original: JAX checkpoints from Big Vision (https://github.com/google-research/big_vision)
- Dataset: WebLI
- Papers: SigLIP 2 (arXiv:2502.14786); Sigmoid loss for language image pre-training (arXiv:2303.15343)
📄 License
This model is released under the Apache-2.0 license.
📚 Citations
@article{tschannen2025siglip,
  title={SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features},
  author={Tschannen, Michael and Gritsenko, Alexey and Wang, Xiao and Naeem, Muhammad Ferjad and Alabdulmohsin, Ibrahim and Parthasarathy, Nikhil and Evans, Talfan and Beyer, Lucas and Xia, Ye and Mustafa, Basil and H{\'e}naff, Olivier and Harmsen, Jeremiah and Steiner, Andreas and Zhai, Xiaohua},
  journal={arXiv preprint arXiv:2502.14786},
  year={2025}
}
@article{zhai2023sigmoid,
  title={Sigmoid loss for language image pre-training},
  author={Zhai, Xiaohua and Mustafa, Basil and Kolesnikov, Alexander and Beyer, Lucas},
  journal={arXiv preprint arXiv:2303.15343},
  year={2023}
}
@misc{big_vision,
  author={Beyer, Lucas and Zhai, Xiaohua and Kolesnikov, Alexander},
  title={Big Vision},
  year={2022},
  publisher={GitHub},
  journal={GitHub repository},
  howpublished={\url{https://github.com/google-research/big_vision}}
}