ViT-L-16-SigLIP2-384 Model Card
This is a SigLIP 2 vision-language model trained on WebLI. The weights were converted for use with OpenCLIP from the original JAX checkpoints in Big Vision, and the model is intended for zero-shot image classification.
Quick Start
The model can be used for zero-shot image classification tasks. Below is a code example demonstrating how to use it:
```python
import torch
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

# Load the model, its preprocessing transform, and the matching tokenizer from the Hub.
model, preprocess = create_model_from_pretrained('hf-hub:timm/ViT-L-16-SigLIP2-384')
tokenizer = get_tokenizer('hf-hub:timm/ViT-L-16-SigLIP2-384')

# Fetch an example image and preprocess it into a batch of one.
image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)

# Tokenize the candidate labels.
labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(image, normalize=True)
    text_features = model.encode_text(text, normalize=True)
    # SigLIP scores each label with a sigmoid, so the values need not sum to 1.
    text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)

zipped_list = list(zip(labels_list, [100 * round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities: ", zipped_list)
```
Features
- Contrastive Image-Text: The model learns a shared image-text embedding space, trained with SigLIP's sigmoid pairwise loss on image-text pairs.
- Zero-Shot Image Classification: It can classify images against arbitrary text labels without task-specific training on those categories (see the prompt-template sketch after this list).
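As a further illustration of the zero-shot setup, the sketch below builds classifier weights by averaging text embeddings over several prompt templates. The templates and class names are assumptions made for this example, not values prescribed by the model card:

```python
# Illustrative sketch: zero-shot classifier weights from prompt templates.
import torch
from open_clip import create_model_from_pretrained, get_tokenizer

model, _ = create_model_from_pretrained('hf-hub:timm/ViT-L-16-SigLIP2-384')
tokenizer = get_tokenizer('hf-hub:timm/ViT-L-16-SigLIP2-384')

templates = ["a photo of a {}.", "a close-up photo of a {}."]  # hypothetical templates
class_names = ["dog", "cat", "beignet"]                        # hypothetical classes

with torch.no_grad():
    weights = []
    for name in class_names:
        text = tokenizer([t.format(name) for t in templates],
                         context_length=model.context_length)
        emb = model.encode_text(text, normalize=True).mean(dim=0)  # average over templates
        weights.append(emb / emb.norm())                           # re-normalize
    classifier = torch.stack(weights)                              # (num_classes, dim)

# `image_features @ classifier.T` can then be scored with the same sigmoid
# expression used in the Quick Start example.
```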
Installation
The code example requires open-clip-torch >= 2.31.0 and timm >= 1.0.15. You can install both with:

```bash
pip install 'open-clip-torch>=2.31.0' 'timm>=1.0.15'
```
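To confirm the installed versions meet these minimums, a quick check such as the following (illustrative, not from the original card) can be run:

```python
import open_clip
import timm

print("open_clip:", open_clip.__version__)  # should be >= 2.31.0
print("timm:", timm.__version__)            # should be >= 1.0.15
```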
Usage Examples
Basic Usage
The Quick Start example above is a basic demonstration of zero-shot image classification: it loads an image from a URL, encodes the image and the candidate labels, and computes a per-label probability for the image.
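The same model can also score a small batch of local images against one label set. The sketch below extends the Quick Start under that assumption; the file paths are hypothetical placeholders:

```python
# Minimal sketch: classify a batch of local images against a shared label set.
import torch
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

model, preprocess = create_model_from_pretrained('hf-hub:timm/ViT-L-16-SigLIP2-384')
tokenizer = get_tokenizer('hf-hub:timm/ViT-L-16-SigLIP2-384')

paths = ["photo1.jpg", "photo2.jpg"]                 # hypothetical local files
labels = ["a dog", "a cat", "a donut", "a beignet"]

images = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
text = tokenizer(labels, context_length=model.context_length)

with torch.no_grad():
    image_features = model.encode_image(images, normalize=True)   # (N, D)
    text_features = model.encode_text(text, normalize=True)       # (L, D)
    probs = torch.sigmoid(
        image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias
    )                                                              # (N, L)

for path, row in zip(paths, probs):
    best = row.argmax().item()
    print(f"{path}: {labels[best]} ({row[best].item():.3f})")
```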
Documentation
Model Details
- Model Type: Contrastive image-text model (SigLIP 2) for zero-shot image classification
- Training Data: WebLI
- Origin: Converted for OpenCLIP from the original JAX checkpoints in Big Vision
- Papers: SigLIP 2 (arXiv:2502.14786) and the sigmoid loss paper (arXiv:2303.15343); see the Citation section below
License
This model is licensed under the Apache-2.0 license.
Citation
```bibtex
@article{tschannen2025siglip,
  title={SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features},
  author={Tschannen, Michael and Gritsenko, Alexey and Wang, Xiao and Naeem, Muhammad Ferjad and Alabdulmohsin, Ibrahim and Parthasarathy, Nikhil and Evans, Talfan and Beyer, Lucas and Xia, Ye and Mustafa, Basil and H{\'e}naff, Olivier and Harmsen, Jeremiah and Steiner, Andreas and Zhai, Xiaohua},
  year={2025},
  journal={arXiv preprint arXiv:2502.14786}
}

@article{zhai2023sigmoid,
  title={Sigmoid loss for language image pre-training},
  author={Zhai, Xiaohua and Mustafa, Basil and Kolesnikov, Alexander and Beyer, Lucas},
  journal={arXiv preprint arXiv:2303.15343},
  year={2023}
}

@misc{big_vision,
  author = {Beyer, Lucas and Zhai, Xiaohua and Kolesnikov, Alexander},
  title = {Big Vision},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/google-research/big_vision}}
}
```