🚀 Model Card for ViT-SO400M-14-SigLIP2
ViT-SO400M-14-SigLIP2 is a SigLIP 2 vision-language model trained on the WebLI dataset and intended for zero-shot image classification. The model was converted from the original JAX checkpoints for use with the OpenCLIP library.
🚀 Quick Start
Environment Setup
Make sure your environment has open-clip-torch >= 2.31.0 and timm >= 1.0.15 installed.
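As a quick sanity check, the installed versions can be printed from Python before running the example (a minimal sketch; it only assumes both packages are installed, e.g. via pip):

```python
from importlib.metadata import version

# Compare the installed versions against the requirements above.
print("open-clip-torch:", version("open-clip-torch"))  # needs >= 2.31.0
print("timm:", version("timm"))                         # needs >= 1.0.15
```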
Code Example
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

# Load the model, its preprocessing transform, and the matching tokenizer from the HF Hub.
model, preprocess = create_model_from_pretrained('hf-hub:timm/ViT-SO400M-14-SigLIP2')
tokenizer = get_tokenizer('hf-hub:timm/ViT-SO400M-14-SigLIP2')

# Download an example image and apply the model's preprocessing.
image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)

# Tokenize the candidate labels.
labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)

with torch.no_grad(), torch.cuda.amp.autocast():
    # Encode and L2-normalize the image and text features.
    image_features = model.encode_image(image, normalize=True)
    text_features = model.encode_text(text, normalize=True)
    # SigLIP scores each image-text pair with a sigmoid over scaled, biased similarities.
    text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)

zipped_list = list(zip(labels_list, [100 * round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities: ", zipped_list)
✨ Key Features
- Model type: contrastive image-text model, usable for zero-shot image classification.
- Training data: trained on the WebLI dataset.
- Multilingual support: SigLIP 2 models are multilingual vision-language encoders with improved semantic understanding, localization, and dense feature extraction (see the sketch after this list).
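As a rough illustration of the multilingual text encoder, the minimal sketch below reuses `model`, `tokenizer`, and `image` from the Quick Start example; the mixed-language prompts are illustrative and not from the original card:

```python
# Candidate labels written in several languages; the SigLIP 2 text tower is multilingual,
# so mixed-language prompts can be tokenized and encoded directly.
multilingual_labels = ["a beignet", "ein Hund", "un chat", "一個甜甜圈"]
text = tokenizer(multilingual_labels, context_length=model.context_length)

with torch.no_grad():
    image_features = model.encode_image(image, normalize=True)
    text_features = model.encode_text(text, normalize=True)
    probs = torch.sigmoid(
        image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias
    )

print(list(zip(multilingual_labels, probs[0].tolist())))
```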
📚 Documentation
Model Information
- Model type: contrastive image-text model for zero-shot image classification
- Training data: WebLI
- Original repository: https://github.com/google-research/big_vision (converted from the original JAX checkpoints for OpenCLIP)
- Papers: SigLIP 2 (arXiv:2502.14786); Sigmoid loss for language image pre-training (arXiv:2303.15343)
📄 License
This model is released under the Apache 2.0 license.
📚 Citation
If you use this model, please cite the following papers:
@article{tschannen2025siglip,
  title={SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features},
  author={Tschannen, Michael and Gritsenko, Alexey and Wang, Xiao and Naeem, Muhammad Ferjad and Alabdulmohsin, Ibrahim and Parthasarathy, Nikhil and Evans, Talfan and Beyer, Lucas and Xia, Ye and Mustafa, Basil and H{\'e}naff, Olivier and Harmsen, Jeremiah and Steiner, Andreas and Zhai, Xiaohua},
  year={2025},
  journal={arXiv preprint arXiv:2502.14786}
}

@article{zhai2023sigmoid,
  title={Sigmoid loss for language image pre-training},
  author={Zhai, Xiaohua and Mustafa, Basil and Kolesnikov, Alexander and Beyer, Lucas},
  journal={arXiv preprint arXiv:2303.15343},
  year={2023}
}

@misc{big_vision,
  author = {Beyer, Lucas and Zhai, Xiaohua and Kolesnikov, Alexander},
  title = {Big Vision},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/google-research/big_vision}}
}