
ViT-L-16-SigLIP-256

Developed by timm
SigLIP (Sigmoid Loss for Language-Image Pre-training) model trained on the WebLI dataset for zero-shot image classification tasks.
Release Time: 10/16/2023

Model Overview

This model is a contrastive image-text model pre-trained using the Sigmoid loss function, supporting zero-shot image classification tasks.

Model Features

Sigmoid loss function
Uses a pairwise sigmoid loss for language-image pre-training: each image-text pair is scored independently, which removes the need for batch-wide softmax normalization and improves the model's contrastive learning performance.
Zero-shot classification
Supports zero-shot image classification, applicable to new categories without task-specific fine-tuning.
Multi-framework support
Supports both OpenCLIP (image + text) and timm (image only) frameworks, offering flexible usage.
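The key idea behind the sigmoid loss is that every image-text pair in a batch becomes an independent binary classification problem (matching pairs positive, all others negative), so no batch-wide normalization is required. A minimal NumPy sketch of the objective; the temperature `t` and bias `b` here are illustrative defaults, not the trained model's learned values:

```python
import numpy as np

def siglip_loss(img_emb, txt_emb, t=10.0, b=-10.0):
    """Pairwise sigmoid loss sketch: every (image, text) pair is an
    independent binary problem -- matching pairs are positives, all
    others negatives. t (temperature) and b (bias) are illustrative
    values, not the model's learned parameters."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = t * img @ txt.T + b          # (N, N) scaled cosine similarities
    n = logits.shape[0]
    labels = 2.0 * np.eye(n) - 1.0        # +1 on the diagonal, -1 elsewhere
    # -log sigmoid(labels * logits) == softplus(-labels * logits)
    return np.sum(np.logaddexp(0.0, -labels * logits)) / n
```

A correctly matched batch should score a lower loss than the same batch with its texts shuffled.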

Model Capabilities

Image feature extraction
Text feature extraction
Zero-shot image classification
Image-text contrastive learning
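Given these capabilities, zero-shot classification reduces to scoring an image embedding against one text embedding per candidate label; with a sigmoid head, each class gets an independent probability rather than a softmax share. A hedged sketch over hypothetical precomputed features (again with placeholder `t` and `b`):

```python
import numpy as np

def zero_shot_probs(image_feat, class_text_feats, t=10.0, b=-10.0):
    """Independent per-class probabilities via the sigmoid head.
    t and b are placeholder values; the real model learns them."""
    img = image_feat / np.linalg.norm(image_feat)
    txt = class_text_feats / np.linalg.norm(class_text_feats,
                                            axis=1, keepdims=True)
    logits = t * txt @ img + b
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid: one prob per class
```

Because the probabilities are independent, they need not sum to one, which lets the model express "none of these classes" more naturally than a softmax.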

Use Cases

Image classification
Zero-shot image classification
Classify new image categories without fine-tuning.
Image retrieval
Text-based image retrieval
Retrieve relevant images based on text descriptions.
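The retrieval use case above works the same way in reverse: embed the query text once, then rank all image embeddings by cosine similarity. A small sketch with hypothetical precomputed features:

```python
import numpy as np

def retrieve(text_feat, image_feats, top_k=3):
    """Return indices of the top_k images most similar to the text
    query, along with their cosine similarities."""
    q = text_feat / np.linalg.norm(text_feat)
    imgs = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    sims = imgs @ q                    # cosine similarity per image
    order = np.argsort(-sims)[:top_k]  # highest similarity first
    return order, sims[order]
```

In practice the image embeddings would be computed once and cached, so each text query costs only a normalization and a matrix-vector product.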