🚀 ViT-B-16-SigLIP2-256 Model Card
This is a SigLIP 2 vision-language model trained on the WebLI dataset, usable for zero-shot image classification. It has been converted from the original JAX checkpoints for use with the OpenCLIP library.
🚀 Quick Start
The model was converted from the original JAX checkpoints in Big Vision and can be used directly with OpenCLIP; the usage example below walks through zero-shot classification end to end.
✨ Key Features
- Model type: contrastive image-text, zero-shot image classification.
- Original repository: https://github.com/google-research/big_vision
- Dataset: WebLI
- Papers:
  - SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features: https://arxiv.org/abs/2502.14786
  - Sigmoid loss for language image pre-training (sketched below): https://arxiv.org/abs/2303.15343
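For orientation, here is a minimal PyTorch sketch of the pairwise sigmoid objective described in the second paper. The function name and reduction are my own choices, not OpenCLIP's API; OpenCLIP ships its own implementation of this loss.

```python
import torch
import torch.nn.functional as F

def sigmoid_contrastive_loss(image_features, text_features, logit_scale, logit_bias):
    """Toy version of the SigLIP loss (Zhai et al., 2023).

    Each image-text pair in the batch is scored independently with a
    sigmoid, instead of CLIP's softmax over the whole batch.
    """
    # Pairwise logits; both feature matrices are assumed L2-normalized.
    logits = logit_scale * image_features @ text_features.T + logit_bias
    # +1 on the diagonal (matching pairs), -1 everywhere else.
    labels = 2.0 * torch.eye(logits.size(0), device=logits.device) - 1.0
    # Negative log-sigmoid of the signed logits, averaged over the batch.
    return -F.logsigmoid(labels * logits).sum() / logits.size(0)
```

Because every pair is scored independently, the batch size does not couple into a normalizing denominator the way it does with a softmax, which is what makes the sigmoid loss attractive for large-scale pre-training.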
📦 Installation
The original card does not list installation steps. The usage example below only requires the OpenCLIP library, installable from PyPI with `pip install open_clip_torch`; PyTorch and Pillow are pulled in as dependencies.
💻 Usage Example
Basic Usage
```python
import torch
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

# Load the model with its matching preprocessing transform and tokenizer.
model, preprocess = create_model_from_pretrained('hf-hub:timm/ViT-B-16-SigLIP2-256')
tokenizer = get_tokenizer('hf-hub:timm/ViT-B-16-SigLIP2-256')

image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)  # add a batch dimension: (1, 3, 256, 256)

labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(image, normalize=True)
    text_features = model.encode_text(text, normalize=True)
    # SigLIP scores each image-text pair independently with a sigmoid,
    # so these probabilities do not need to sum to 1.
    text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)

# Report each label's probability as a percentage.
zipped_list = list(zip(labels_list, [round(100 * p.item(), 1) for p in text_probs[0]]))
print("Label probabilities:", zipped_list)
```
📄 License
This model is released under the Apache-2.0 license.
📚 Citation
```bibtex
@article{tschannen2025siglip,
  title={SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features},
  author={Tschannen, Michael and Gritsenko, Alexey and Wang, Xiao and Naeem, Muhammad Ferjad and Alabdulmohsin, Ibrahim and Parthasarathy, Nikhil and Evans, Talfan and Beyer, Lucas and Xia, Ye and Mustafa, Basil and H{\'e}naff, Olivier and Harmsen, Jeremiah and Steiner, Andreas and Zhai, Xiaohua},
  journal={arXiv preprint arXiv:2502.14786},
  year={2025}
}

@article{zhai2023sigmoid,
  title={Sigmoid loss for language image pre-training},
  author={Zhai, Xiaohua and Mustafa, Basil and Kolesnikov, Alexander and Beyer, Lucas},
  journal={arXiv preprint arXiv:2303.15343},
  year={2023}
}

@misc{big_vision,
  author={Beyer, Lucas and Zhai, Xiaohua and Kolesnikov, Alexander},
  title={Big Vision},
  year={2022},
  publisher={GitHub},
  journal={GitHub repository},
  howpublished={\url{https://github.com/google-research/big_vision}}
}
```