🚀 ViT-B-16-SigLIP2-384 Model Card
This is a SigLIP 2 vision-language model trained on WebLI that can be used for zero-shot image classification. It was converted from the original JAX checkpoints for use with OpenCLIP.
🚀 Quick Start
Requirements
The code example below requires `open-clip-torch >= 2.31.0` and `timm >= 1.0.15`.
Code Example
```python
import torch
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

# Load the model, its preprocessing transform, and the matching tokenizer from the HF Hub.
model, preprocess = create_model_from_pretrained('hf-hub:timm/ViT-B-16-SigLIP2-384')
tokenizer = get_tokenizer('hf-hub:timm/ViT-B-16-SigLIP2-384')

image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)  # add a batch dimension

labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(image, normalize=True)
    text_features = model.encode_text(text, normalize=True)
    # SigLIP scores each label independently with a sigmoid over scaled, biased logits.
    text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)

zipped_list = list(zip(labels_list, [100 * round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities: ", zipped_list)
```
✨ Key Features
- Vision-language model built on the SigLIP 2 architecture.
- Usable for zero-shot image classification.
- Trained on the WebLI dataset.
📦 Installation
The original README does not give installation steps; install `open-clip-torch` and `timm` following their official documentation, for example `pip install 'open-clip-torch>=2.31.0' 'timm>=1.0.15'`.
💻 Usage Examples
Basic Usage
The basic usage code is identical to the Quick Start example above.
Advanced Usage
The original README does not include an advanced usage example; a hedged sketch of batch embedding extraction follows.
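As a minimal sketch (not part of the original model card), the snippet below embeds a batch of images with `encode_image` and compares them by cosine similarity; the single-URL list is a hypothetical stand-in for your own image collection.

```python
import torch
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained

model, preprocess = create_model_from_pretrained('hf-hub:timm/ViT-B-16-SigLIP2-384')
model.eval()

# Hypothetical image URLs; replace with your own collection.
urls = [
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png',
]
batch = torch.stack([preprocess(Image.open(urlopen(u)).convert('RGB')) for u in urls])

with torch.no_grad():
    # normalize=True returns L2-normalized embeddings of shape [N, embed_dim].
    embeddings = model.encode_image(batch, normalize=True)

# For unit-norm embeddings the dot product equals cosine similarity.
similarity = embeddings @ embeddings.T
print(similarity)
```

Because the embeddings come back already normalized, nearest-neighbour retrieval reduces to a single matrix multiply against a precomputed embedding bank.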
📚 Documentation
Model Details
This is a SigLIP 2 vision-language model trained on WebLI. The model has been converted from the original JAX checkpoints in Big Vision for use with OpenCLIP.
Model Information

| Property | Details |
|----------|---------|
| Model Type | Vision-language model (zero-shot image classification) |
| Training Data | WebLI |
| License | apache-2.0 |
🔧 Technical Details
The original README gives no further implementation details. As the model name indicates, the image tower is a ViT-Base encoder with 16x16 patches operating at 384x384 input resolution, trained with the sigmoid loss described in the SigLIP papers cited below.
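As a quick, hedged illustration of what the loaded checkpoint exposes, the snippet below prints the scalar parameters that the Quick Start code relies on; only attributes already used in the example above are referenced.

```python
from open_clip import create_model_from_pretrained

model, preprocess = create_model_from_pretrained('hf-hub:timm/ViT-B-16-SigLIP2-384')

print("context_length:", model.context_length)               # tokenizer context length
print("logit_scale (exp):", model.logit_scale.exp().item())  # temperature applied to the logits
print("logit_bias:", model.logit_bias.item())                # learned bias inside the sigmoid
```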
📄 License
This model is released under the apache-2.0 license.
📚 Citation

```bibtex
@article{tschannen2025siglip,
  title={SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features},
  author={Tschannen, Michael and Gritsenko, Alexey and Wang, Xiao and Naeem, Muhammad Ferjad and Alabdulmohsin, Ibrahim and Parthasarathy, Nikhil and Evans, Talfan and Beyer, Lucas and Xia, Ye and Mustafa, Basil and H{\'e}naff, Olivier and Harmsen, Jeremiah and Steiner, Andreas and Zhai, Xiaohua},
  year={2025},
  journal={arXiv preprint arXiv:2502.14786}
}

@article{zhai2023sigmoid,
  title={Sigmoid loss for language image pre-training},
  author={Zhai, Xiaohua and Mustafa, Basil and Kolesnikov, Alexander and Beyer, Lucas},
  journal={arXiv preprint arXiv:2303.15343},
  year={2023}
}

@misc{big_vision,
  author = {Beyer, Lucas and Zhai, Xiaohua and Kolesnikov, Alexander},
  title = {Big Vision},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/google-research/big_vision}}
}
```