# Language-free supervision
## Webssl Dino1b Full2b 224
A 1-billion-parameter Vision Transformer trained on 2 billion web images with DINOv2 self-supervised learning; it learns visual representations without language supervision.
Tags: Image Classification · Transformers
Author: facebook · Downloads: 1,172 · Likes: 1
## Webssl Dino3b Full2b 224
A 3-billion-parameter Vision Transformer trained on 2 billion web images with DINOv2 self-supervised learning; it learns powerful visual representations without language supervision.
Tags: Image Classification · Transformers
Author: facebook · Downloads: 72 · Likes: 0
## Webssl Dino300m Full2b 224
A 300-million-parameter Vision Transformer trained at 224×224 resolution on 2 billion web images drawn from MetaCLIP data, using the DINOv2 self-supervised learning method.
Tags: Image Classification · Transformers
Author: facebook · Downloads: 503 · Likes: 7
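The 300M/1B/3B figures in the model names refer to encoder parameter counts. As a rough sanity check, a Vision Transformer's parameters scale as roughly 12 · depth · width² (4·width² for the attention projections plus 8·width² for an MLP with the usual 4× expansion, ignoring embeddings, biases, and norms). The sketch below uses illustrative depth/width values chosen to land near those sizes; they are assumptions, not the released Web-SSL configurations.

```python
def vit_param_estimate(depth: int, width: int) -> int:
    """Rough ViT parameter count: per block, 4*width^2 for the
    Q/K/V/output attention projections plus 8*width^2 for a 4x-expansion
    MLP; embeddings, biases, and layer norms are ignored."""
    return depth * 12 * width * width

# Illustrative configs (assumed, not the released Web-SSL settings):
for label, depth, width in [("~300M", 24, 1024),
                            ("~1B", 40, 1536),
                            ("~3B", 48, 2304)]:
    print(label, f"{vit_param_estimate(depth, width) / 1e9:.2f}B params")
```

With these assumed shapes the estimates come out near 0.3B, 1.1B, and 3.1B, which is why parameter counts are usually quoted to one significant figure in model names.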