🚀 ViT-SO400M-14-SigLIP2 Model Card
ViT-SO400M-14-SigLIP2 is a SigLIP 2 vision-language model trained on the WebLI dataset and can be used for zero-shot image classification. The model was converted from the original JAX checkpoints for use with the OpenCLIP library.
🚀 Quick Start
Environment Setup
Make sure your environment has open-clip-torch (version >= 2.31.0) and timm (version >= 1.0.15) installed.
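To confirm the installed packages meet these requirements, a minimal sketch (assuming both packages expose a __version__ attribute, which current releases do) is:

# Minimal sketch: check the installed versions against the requirements above
# (open-clip-torch >= 2.31.0, timm >= 1.0.15).
import open_clip
import timm

print("open_clip version:", open_clip.__version__)  # expect >= 2.31.0
print("timm version:", timm.__version__)            # expect >= 1.0.15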
Code Example
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

# Load the pretrained model (with its matched image preprocessing) and tokenizer from the Hub
model, preprocess = create_model_from_pretrained('hf-hub:timm/ViT-SO400M-14-SigLIP2')
tokenizer = get_tokenizer('hf-hub:timm/ViT-SO400M-14-SigLIP2')

# Fetch an example image and apply the model's preprocessing
image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)

# Candidate labels for zero-shot classification
labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(image, normalize=True)
    text_features = model.encode_text(text, normalize=True)
    # SigLIP scores each label independently with a sigmoid rather than a softmax
    text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)

zipped_list = list(zip(labels_list, [100 * round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities: ", zipped_list)
✨ Key Features
- Model type: contrastive image-text model for zero-shot image classification.
- Training data: trained on the WebLI dataset.
- Multilingual support: SigLIP 2 models are multilingual vision-language encoders with improved semantic understanding, localization, and dense features; non-English labels can be used directly, as shown in the sketch after this list.
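A minimal sketch of that multilingual use, reusing model, tokenizer, and image_features from the Quick Start example (the specific labels below are illustrative):

# Sketch: the multilingual tokenizer accepts non-English prompts, so zero-shot
# labels can be written in other languages (labels below are illustrative).
multilingual_labels = ["ein Hund", "un chat", "一个甜甜圈", "un beignet"]
multilingual_text = tokenizer(multilingual_labels, context_length=model.context_length)

with torch.no_grad():
    multilingual_features = model.encode_text(multilingual_text, normalize=True)
    probs = torch.sigmoid(image_features @ multilingual_features.T * model.logit_scale.exp() + model.logit_bias)

print(list(zip(multilingual_labels, [round(p.item(), 3) for p in probs[0]])))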
📚 Detailed Documentation
Model Information
📄 License
This model is released under the Apache 2.0 license.
📚 Citation
If you use this model, please cite the following:
@article{tschannen2025siglip,
title={SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features},
author={Tschannen, Michael and Gritsenko, Alexey and Wang, Xiao and Naeem, Muhammad Ferjad and Alabdulmohsin, Ibrahim and Parthasarathy, Nikhil and Evans, Talfan and Beyer, Lucas and Xia, Ye and Mustafa, Basil and H{\'e}naff, Olivier and Harmsen, Jeremiah and Steiner, Andreas and Zhai, Xiaohua},
year={2025},
journal={arXiv preprint arXiv:2502.14786}
}
@article{zhai2023sigmoid,
title={Sigmoid loss for language image pre-training},
author={Zhai, Xiaohua and Mustafa, Basil and Kolesnikov, Alexander and Beyer, Lucas},
journal={arXiv preprint arXiv:2303.15343},
year={2023}
}
@misc{big_vision,
author = {Beyer, Lucas and Zhai, Xiaohua and Kolesnikov, Alexander},
title = {Big Vision},
year = {2022},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/google-research/big_vision}}
}