# ViT-gopt-16-SigLIP2-256 Model Card
A SigLIP 2 Vision-Language model trained on WebLI for zero-shot image classification.
## Quick Start
This SigLIP 2 Vision-Language model is trained on the WebLI dataset. It has been converted from the original JAX checkpoints in Big Vision for use in OpenCLIP.
## Features
- Model Type: Contrastive Image-Text, Zero-Shot Image Classification
- Original: [Big Vision repository](https://github.com/google-research/big_vision)
- Dataset: WebLI
- Papers:
  - SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features (arXiv:2502.14786)
  - Sigmoid loss for language image pre-training (arXiv:2303.15343)
| Property | Details |
|----------|---------|
| Model Type | Contrastive Image-Text, Zero-Shot Image Classification |
| Training Data | WebLI |
## Installation
This model requires `open-clip-torch >= 2.31.0` and `timm >= 1.0.15`.
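After installing both packages, a quick sanity check can confirm the installed versions meet these minimums. This is a minimal sketch, assuming both libraries are importable in the current environment:

```python
import open_clip
import timm

# Both libraries expose their version string; compare against the minimums above.
print("open_clip:", open_clip.__version__)  # expect >= 2.31.0
print("timm:", timm.__version__)            # expect >= 1.0.15
```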
## Usage Examples
### Basic Usage
```python
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

# Load the model, its preprocessing transform, and the matching tokenizer from the Hugging Face Hub.
model, preprocess = create_model_from_pretrained('hf-hub:timm/ViT-gopt-16-SigLIP2-256')
tokenizer = get_tokenizer('hf-hub:timm/ViT-gopt-16-SigLIP2-256')

# Fetch an example image and apply the model's preprocessing (resize, crop, normalize).
image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)

# Candidate labels for zero-shot classification.
labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)

with torch.no_grad(), torch.cuda.amp.autocast():
    # L2-normalized image and text embeddings.
    image_features = model.encode_image(image, normalize=True)
    text_features = model.encode_text(text, normalize=True)
    # SigLIP scores each image-text pair independently with a sigmoid.
    text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)

# Pair each label with its probability (as a percentage).
zipped_list = list(zip(labels_list, [100 * round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities: ", zipped_list)
```
## Documentation
The above code demonstrates how to use the `ViT-gopt-16-SigLIP2-256` model for zero-shot image classification. It loads the model and preprocessor from the Hugging Face Hub, preprocesses an image, tokenizes the text labels, and computes the probability that the image matches each label. Because SigLIP scores each image-text pair with a sigmoid rather than a softmax over all labels, the per-label probabilities are independent and do not need to sum to 1.
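If a single prediction is needed, the highest-scoring label can be read directly from `text_probs`. A minimal sketch, assuming the `labels_list` and `text_probs` variables from the Basic Usage example above:

```python
# Pick the label with the highest sigmoid score for the (single) input image.
# Assumes `labels_list` and `text_probs` from the Basic Usage example above.
best_idx = text_probs[0].argmax().item()
print(f"Top label: {labels_list[best_idx]} ({text_probs[0, best_idx].item():.3f})")
```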
## License
This model is licensed under the Apache 2.0 License.
## Citation
```bibtex
@article{tschannen2025siglip,
  title={SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features},
  author={Tschannen, Michael and Gritsenko, Alexey and Wang, Xiao and Naeem, Muhammad Ferjad and Alabdulmohsin, Ibrahim and Parthasarathy, Nikhil and Evans, Talfan and Beyer, Lucas and Xia, Ye and Mustafa, Basil and H{\'e}naff, Olivier and Harmsen, Jeremiah and Steiner, Andreas and Zhai, Xiaohua},
  year={2025},
  journal={arXiv preprint arXiv:2502.14786}
}

@article{zhai2023sigmoid,
  title={Sigmoid loss for language image pre-training},
  author={Zhai, Xiaohua and Mustafa, Basil and Kolesnikov, Alexander and Beyer, Lucas},
  journal={arXiv preprint arXiv:2303.15343},
  year={2023}
}

@misc{big_vision,
  author = {Beyer, Lucas and Zhai, Xiaohua and Kolesnikov, Alexander},
  title = {Big Vision},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/google-research/big_vision}}
}
```