# ViT-L-16-SigLIP2-256 Model Card
A SigLIP 2 Vision-Language model trained on WebLI for zero-shot image classification.
## Quick Start
This SigLIP 2 vision-language model was trained on the WebLI dataset and has been converted from the original JAX checkpoints in Google's Big Vision repository for use with OpenCLIP.
## Features
- Model Type: Contrastive Image-Text, Zero-Shot Image Classification.
- Original: https://github.com/google-research/big_vision
- Dataset: WebLI
- Papers:
  - SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features: https://arxiv.org/abs/2502.14786
  - Sigmoid loss for language image pre-training: https://arxiv.org/abs/2303.15343
## Installation
No specific installation steps are provided in the original README. The usage example below depends on OpenCLIP and PyTorch, which can typically be installed with `pip install open_clip_torch`.
## Usage Examples

### Basic Usage
```python
import torch
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

# Load the model, its image preprocessing transform, and the matching tokenizer.
model, preprocess = create_model_from_pretrained('hf-hub:timm/ViT-L-16-SigLIP2-256')
tokenizer = get_tokenizer('hf-hub:timm/ViT-L-16-SigLIP2-256')

image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)  # (1, 3, 256, 256)

labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(image, normalize=True)
    text_features = model.encode_text(text, normalize=True)
    # SigLIP scores each (image, label) pair with a sigmoid rather than a
    # softmax, so the resulting probabilities need not sum to 1.
    text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)

zipped_list = list(zip(labels_list, [round(100 * p.item(), 1) for p in text_probs[0]]))
print("Label probabilities (%):", zipped_list)
```
## Documentation
The model is a SigLIP 2 vision-language encoder pair trained on WebLI, intended primarily for zero-shot image classification: an image is scored against a set of candidate text labels without any task-specific fine-tuning, as demonstrated in the usage example above.
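Since the image and text encoders share a common embedding space, the same pattern extends to image-text retrieval. The sketch below is a minimal, unofficial example; it assumes the `model`, `preprocess`, and `tokenizer` objects from the usage example and two hypothetical local image files.

```python
import torch
from PIL import Image

image_paths = ["photo1.jpg", "photo2.jpg"]  # hypothetical local files
images = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in image_paths])

with torch.no_grad():
    image_feats = model.encode_image(images, normalize=True)  # (N, D)
    query = tokenizer(["a plate of beignets"], context_length=model.context_length)
    query_feat = model.encode_text(query, normalize=True)     # (1, D)

# Features are L2-normalized, so the dot product is cosine similarity.
scores = (query_feat @ image_feats.T).squeeze(0)
best = scores.argmax().item()
print(f"Best match: {image_paths[best]} (similarity {scores[best]:.3f})")
```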
## License
This model is licensed under the Apache-2.0 license.
## Citations
```bibtex
@article{tschannen2025siglip,
  title={SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features},
  author={Tschannen, Michael and Gritsenko, Alexey and Wang, Xiao and Naeem, Muhammad Ferjad and Alabdulmohsin, Ibrahim and Parthasarathy, Nikhil and Evans, Talfan and Beyer, Lucas and Xia, Ye and Mustafa, Basil and H{\'e}naff, Olivier and Harmsen, Jeremiah and Steiner, Andreas and Zhai, Xiaohua},
  journal={arXiv preprint arXiv:2502.14786},
  year={2025}
}

@article{zhai2023sigmoid,
  title={Sigmoid loss for language image pre-training},
  author={Zhai, Xiaohua and Mustafa, Basil and Kolesnikov, Alexander and Beyer, Lucas},
  journal={arXiv preprint arXiv:2303.15343},
  year={2023}
}

@misc{big_vision,
  author={Beyer, Lucas and Zhai, Xiaohua and Kolesnikov, Alexander},
  title={Big Vision},
  year={2022},
  publisher={GitHub},
  journal={GitHub repository},
  howpublished={\url{https://github.com/google-research/big_vision}}
}
```