TiC-CLIP Bestpool Sequential

Developed by Apple
TiC-CLIP is a vision-language model trained on the TiC-DataComp-Yearly dataset, using continual learning strategies to keep the model up to date with newly arriving data.
Downloads: 280
Release Time: 6/5/2024

Model Overview

This model is designed for continual learning on vision-language tasks. It is trained on a temporally ordered data stream, avoiding the high cost of repeated full retraining, and supports zero-shot image classification and cross-modal retrieval.
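
If the released checkpoint is in an OpenCLIP-compatible format, zero-shot classification can be sketched as below. The repo id, checkpoint filename, architecture name, and image path are assumptions to verify against the official model card, not confirmed values.

```python
# Minimal zero-shot classification sketch with OpenCLIP.
# Assumptions: the checkpoint is OpenCLIP-compatible, lives at
# "apple/TiC-CLIP-bestpool-sequential" as "checkpoint_final.pt",
# and uses a ViT-B-16 backbone -- verify against the model card.
import torch
import open_clip
from PIL import Image
from huggingface_hub import hf_hub_download

ckpt = hf_hub_download("apple/TiC-CLIP-bestpool-sequential", "checkpoint_final.pt")
model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-16", pretrained=ckpt)
tokenizer = open_clip.get_tokenizer("ViT-B-16")
model.eval()

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # placeholder image
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
text = tokenizer(labels)

with torch.no_grad():
    img_emb = model.encode_image(image)
    txt_emb = model.encode_text(text)
    # Normalize, then softmax over cosine similarities: higher = better match.
    img_emb /= img_emb.norm(dim=-1, keepdim=True)
    txt_emb /= txt_emb.norm(dim=-1, keepdim=True)
    probs = (100.0 * img_emb @ txt_emb.T).softmax(dim=-1)

print(dict(zip(labels, probs[0].tolist())))
```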

Model Features

Continual Learning Strategy
Uses an experience replay strategy for continual training, reducing compute by roughly 2.5x compared with full retraining from scratch (see the replay sketch after this list)
Temporal Robustness
Designed for data whose distribution shifts over time, outperforming statically trained CLIP models on newer data
Large-scale Training Data
Trained on the TiC-DataComp dataset, which contains 12.7 billion timestamped image-text pairs spanning 2014-2022
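
As a rough illustration of the experience replay idea above (not Apple's actual training code), the sketch below keeps a reservoir of past image-text pairs and mixes them into each batch of new data. The buffer capacity and replay ratio are arbitrary placeholder values.

```python
# Illustrative experience replay for continual training; all parameters
# here are placeholder choices, not values from the TiC-CLIP recipe.
import random

class ReplayBuffer:
    """Fixed-size reservoir of past (image, text) pairs."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, pair):
        # Reservoir sampling keeps a uniform sample of everything seen so far.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(pair)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = pair

    def sample(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))

def make_batch(new_pairs, buffer, replay_ratio=0.5):
    """Mix fresh pairs with replayed old pairs in each training batch."""
    batch = list(new_pairs) + buffer.sample(int(len(new_pairs) * replay_ratio))
    for p in new_pairs:
        buffer.add(p)
    random.shuffle(batch)
    return batch
```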

Model Capabilities

Zero-shot image classification
Image-text retrieval (sketched after this list)
Cross-modal representation learning
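
A minimal text-to-image retrieval sketch, reusing the model, preprocess, and tokenizer objects from the loading example above; the image paths and query text are placeholders.

```python
# Rank a small gallery of images against a text query.
# Reuses model/preprocess/tokenizer from the loading sketch above.
import torch
from PIL import Image

paths = ["img0.jpg", "img1.jpg", "img2.jpg"]  # placeholder gallery
images = torch.stack([preprocess(Image.open(p)) for p in paths])

with torch.no_grad():
    gallery = model.encode_image(images)
    gallery /= gallery.norm(dim=-1, keepdim=True)
    query = model.encode_text(tokenizer(["a smartphone from 2022"]))
    query /= query.norm(dim=-1, keepdim=True)

# Higher cosine similarity = better match to the query.
scores = (query @ gallery.T)[0]
for i in scores.argsort(descending=True).tolist():
    print(paths[i], float(scores[i]))
```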

Use Cases

Computer Vision
Time-sensitive Image Classification
Image classification for concepts and trends that evolve over time
Achieves roughly 8% higher zero-shot accuracy than statically trained CLIP models on 2021-2022 data
Information Retrieval
Cross-temporal Image Retrieval
Retrieve images from different time periods based on text queries (see the time-window sketch below)
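
Building on the retrieval sketch above, cross-temporal retrieval can be approximated by filtering ranked results to a time window. The per-image year metadata here is hypothetical; the dataset's actual timestamp schema may differ.

```python
# Restrict retrieval to a time window, assuming each gallery item carries
# a (path, year) record -- this metadata schema is hypothetical.
records = [("img0.jpg", 2015), ("img1.jpg", 2021), ("img2.jpg", 2022)]

def retrieve(scores, records, year_min, year_max):
    """Return matches from the requested period, best score first."""
    hits = [(path, year, float(scores[i]))
            for i, (path, year) in enumerate(records)
            if year_min <= year <= year_max]
    return sorted(hits, key=lambda h: -h[2])

# Reuses `scores` from the retrieval sketch above.
print(retrieve(scores, records, 2021, 2022))
```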