🚀 CLIP ViT-B-32 256x256 trained on DataComp-1B
This is a CLIP ViT-B/32 model trained on DataComp-1B at 256x256 resolution, aiming to facilitate zero-shot image classification research and related interdisciplinary studies.
🚀 Quick Start
To get started with this model, see the 💻 Usage Examples section below.
✨ Features
- Zero-shot Image Classification: Capable of performing zero-shot image classification tasks.
- Versatile Applications: Can be used for image and text retrieval, image classification fine-tuning, linear probe image classification, image generation guiding and conditioning, and more.
📦 Installation
The model is intended to be used with OpenCLIP (https://github.com/mlfoundations/open_clip); installing the `open_clip_torch` package (for example via `pip install open_clip_torch`) is assumed in the usage example below.
💻 Usage Examples
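Below is a minimal zero-shot classification sketch using OpenCLIP. The Hugging Face Hub repository id (`laion/CLIP-ViT-B-32-256x256-DataComp-1B`) and the image path are illustrative assumptions; substitute the actual location of the checkpoint. The snippet assumes `open_clip_torch` and `Pillow` are installed.

```python
import torch
from PIL import Image
import open_clip

# Assumed Hub location of this checkpoint; replace with the actual repo id if different.
MODEL_ID = "hf-hub:laion/CLIP-ViT-B-32-256x256-DataComp-1B"

model, _, preprocess = open_clip.create_model_and_transforms(MODEL_ID)
tokenizer = open_clip.get_tokenizer(MODEL_ID)
model.eval()

# "example.jpg" is a placeholder image path.
image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # (1, 3, 256, 256)
text = tokenizer(["a photo of a cat", "a photo of a dog", "a photo of a car"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    # scaled cosine similarities turned into label probabilities
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probabilities:", probs)
```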
📚 Documentation
Model Details
Model Description
A CLIP ViT-B/32 model trained on the DataComp-1B dataset (https://github.com/mlfoundations/datacomp) using OpenCLIP (https://github.com/mlfoundations/open_clip) at 256x256 resolution. Model training was done on the [JURECA](https://www.fz-juelich.de/en/ias/jsc/systems/supercomputers/jureca) cluster.
Uses
Intended Use
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. It aims to help researchers better understand and explore zero-shot, arbitrary image classification and can be used for interdisciplinary studies of the potential impact of such models.
Direct Use
Zero-shot image classification, image and text retrieval, among others.
Downstream Use
Image classification and other image-task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
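As a sketch of the linear-probe use case, the snippet below trains a logistic-regression classifier on frozen image embeddings. The features and labels are random placeholders, and `scikit-learn` is an assumed dependency; in practice the features come from `model.encode_image()` with gradients disabled.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder frozen CLIP image features and labels. In practice, run every image
# through model.encode_image() under torch.no_grad() and stack the results;
# ViT-B/32 produces 512-dimensional embeddings.
train_features = np.random.randn(1000, 512).astype(np.float32)
train_labels = np.random.randint(0, 10, size=1000)
test_features = np.random.randn(200, 512).astype(np.float32)
test_labels = np.random.randint(0, 10, size=200)

# Linear probe: the backbone stays frozen, only this linear classifier is trained.
probe = LogisticRegression(max_iter=1000, C=1.0)
probe.fit(train_features, train_labels)
print("linear-probe accuracy:", probe.score(test_features, test_labels))
```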
Out-of-Scope Use
As per the OpenAI models, any deployed use case of the model, whether commercial or not, is currently out of scope. Non-deployed use cases such as image search in a constrained environment are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. Certain use cases in the domain of surveillance and facial recognition are always out of scope regardless of the model's performance.
Training Details
Training Data
This model was trained with 1.4 billion samples of the DataComp-1B dataset (https://arxiv.org/abs/2304.14108).
⚠️ Important Note
The dataset is uncurated, and the collected links may lead to strongly discomforting and disturbing content. Use the dataset for research purposes only. Although a "safe" subset can be extracted by filtering on safety tags, harmful content may still be present. Do not use it to build ready-to-use industrial products.
SLURM script
#!/bin/bash -x
# activate the Python environment that has OpenCLIP installed
source /path/miniconda/bin/activate
export CUDA_VISIBLE_DEVICES=0,1,2,3
export MASTER_PORT=12802
# use the first node in the SLURM allocation as the distributed rendezvous host
master_addr=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
# the trailing "i" selects the node's InfiniBand interface (JURECA hostname convention)
export MASTER_ADDR=$master_addr"i"
echo "MASTER_ADDR="$MASTER_ADDR
srun --cpu-bind=v --cpus-per-task=12 python -u -m training.main --aug-cfg scale='(0.4, 1.0)' color_jitter='(0.32, 0.32, 0.32, 0.08)' color_jitter_prob=0.8 gray_scale_prob=0.2 use_timm=True \
--save-frequency 1 \
--zeroshot-frequency 1 \
--dataset-type webdataset \
--train-data '/path/to/data' \
--report-to tensorboard \
--train-num-samples 1398270000 \
--warmup 2000 \
--batch-size 896 \
--epochs 24 \
--workers 8 \
--model ViT-B-32-256 \
--logs logs \
--seed 0 \
--ddp-static-graph \
--local-loss \
--gather-with-grad \
--lr 0.001 \
--log-every-n-steps 20 \
--save-most-recent \
--resume latest \
--grad-checkpointing \
--precision amp_bfloat16 \
--beta1 0.9 \
--beta2 0.95 \
--wd 0.2
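For orientation, the sketch below works out the optimizer-step budget implied by the flags above. The node count is a hypothetical placeholder (the script exposes 4 GPUs per node, but the SLURM allocation size is not shown here), so treat the resulting numbers as an illustration of the arithmetic rather than the actual training configuration.

```python
# Rough step-count arithmetic for the SLURM configuration above.
samples_per_epoch = 1_398_270_000   # --train-num-samples
epochs = 24                         # --epochs
per_gpu_batch = 896                 # --batch-size (per GPU)
gpus_per_node = 4                   # CUDA_VISIBLE_DEVICES=0,1,2,3
nodes = 24                          # hypothetical; the real value is set by the SLURM allocation

global_batch = per_gpu_batch * gpus_per_node * nodes
steps_per_epoch = samples_per_epoch // global_batch
total_steps = steps_per_epoch * epochs
warmup_steps = 2000                 # --warmup

print(f"global batch size: {global_batch}")
print(f"steps per epoch:   {steps_per_epoch}")
print(f"total steps:       {total_steps}")
print(f"warmup fraction:   {warmup_steps / total_steps:.2%}")
```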
Evaluation
Testing Data, Factors & Metrics
Testing was performed on a suite of 38 datasets; see the paper (https://arxiv.org/abs/2304.14108) for more details.
Results
The model achieves 72.7% zero-shot top-1 accuracy on ImageNet-1k, 64.4% image retrieval recall@5, and 80.7% text retrieval recall@5 on COCO captions.
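For reference, below is a minimal sketch of how a retrieval recall@k number of this kind can be computed from a similarity matrix. It is a simplified setting that assumes L2-normalized embeddings and exactly one matching caption per image (the real COCO evaluation has five captions per image), and it uses random placeholder embeddings.

```python
import torch

def recall_at_k(image_embeds: torch.Tensor, text_embeds: torch.Tensor, k: int = 5) -> float:
    """Fraction of images whose matching text (same row index) is ranked in the top k."""
    sims = image_embeds @ text_embeds.T               # cosine similarities (embeddings are L2-normalized)
    topk = sims.topk(k, dim=-1).indices               # indices of the k best texts per image
    targets = torch.arange(sims.size(0)).unsqueeze(1)
    return (topk == targets).any(dim=-1).float().mean().item()

# Placeholder embeddings; in practice they come from model.encode_image / model.encode_text
# followed by L2 normalization. ViT-B/32 embeddings are 512-dimensional.
image_embeds = torch.nn.functional.normalize(torch.randn(100, 512), dim=-1)
text_embeds = torch.nn.functional.normalize(torch.randn(100, 512), dim=-1)

# Ranking texts for each image gives text retrieval; swap the arguments for image retrieval.
print("text retrieval recall@5:", recall_at_k(image_embeds, text_embeds, k=5))
```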
Citation
BibTeX
DataComp
@article{datacomp,
title={DataComp: In search of the next generation of multimodal datasets},
author={Samir Yitzhak Gadre and Gabriel Ilharco and Alex Fang and Jonathan Hayase and Georgios Smyrnis and Thao Nguyen and Ryan Marten and Mitchell Wortsman and Dhruba Ghosh and Jieyu Zhang and Eyal Orgad and Rahim Entezari and Giannis Daras and Sarah Pratt and Vivek Ramanujan and Yonatan Bitton and Kalyani Marathe and Stephen Mussmann and Richard Vencu and Mehdi Cherti and Ranjay Krishna and Pang Wei Koh and Olga Saukh and Alexander Ratner and Shuran Song and Hannaneh Hajishirzi and Ali Farhadi and Romain Beaumont and Sewoong Oh and Alex Dimakis and Jenia Jitsev and Yair Carmon and Vaishaal Shankar and Ludwig Schmidt},
journal={arXiv preprint arXiv:2304.14108},
year={2023}
}
OpenAI CLIP paper
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
OpenCLIP software
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
🔧 Technical Details
Training Data
This model was trained with 1.4 billion samples of the DataComp-1B dataset. The motivation behind the dataset's creation is to democratize research and experimentation around large-scale multimodal model training. However, the dataset is uncurated, and the collected links may lead to discomforting content. A "safe" subset can be extracted by filtering on safety tags, but harmful content may still be present.
SLURM Script
The provided SLURM script details the training configuration, including node setup, GPU usage, and training parameters such as batch size and learning rate.
📄 License
The model is released under the MIT license.