🚀 Vision Transformer (small-sized model) trained using DINOv2
A Vision Transformer (ViT) model trained with the DINOv2 method, offering robust visual feature extraction.
🚀 Quick Start
The Vision Transformer (ViT) is a transformer encoder model (similar to BERT) that has been pretrained in a self-supervised manner on a large set of images. This model was introduced in the paper DINOv2: Learning Robust Visual Features without Supervision by Oquab et al. and first released in this repository.
Disclaimer: The team releasing DINOv2 did not write a model card for this model, so this model card has been written by the Hugging Face team.
✨ Features
- Self-supervised pre-training on a large image collection.
- Ability to learn inner representations of images for downstream tasks.
- No fine-tuned heads included, providing flexibility for different applications.
📚 Documentation
Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion. Images are presented to the model as a sequence of fixed-size patches, which are linearly embedded. A [CLS] token is added to the beginning of the sequence for classification tasks, and absolute position embeddings are added before feeding the sequence to the layers of the Transformer encoder. Note that this model does not include any fine-tuned heads.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks. For example, if you have a dataset of labeled images, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. Typically, a linear layer is placed on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
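The linear-probe setup described above can be sketched in a few lines. This is a minimal illustration, not the released training code: the hidden size of 384 (DINOv2-small), the token count of 257, and the 10 target classes are all assumptions, and a random tensor stands in for the encoder output.

```python
import torch

# Sketch of a linear probe on the [CLS] token. The hidden size (384,
# matching DINOv2-small), the 10 target classes, and the stand-in
# encoder output are assumptions for illustration.
hidden_size, num_classes = 384, 10
classifier = torch.nn.Linear(hidden_size, num_classes)

# Stand-in for `outputs.last_hidden_state` from the frozen encoder:
# a batch of 2 images, 257 tokens ([CLS] + patch tokens), hidden size 384.
last_hidden_state = torch.randn(2, 257, hidden_size)

cls_embedding = last_hidden_state[:, 0]  # [CLS] token, one vector per image
logits = classifier(cls_embedding)       # per-image class scores
print(logits.shape)                      # torch.Size([2, 10])
```

In practice the encoder is kept frozen and only the linear layer is trained on the labeled dataset.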
Intended uses & limitations
You can use the raw model for feature extraction. Check the model hub to find fine-tuned versions for tasks that interest you.
💻 Usage Examples
Basic Usage
```python
from transformers import AutoImageProcessor, AutoModel
from PIL import Image
import requests

# Load an example image from the COCO validation set
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained('facebook/dinov2-small')
model = AutoModel.from_pretrained('facebook/dinov2-small')

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
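The `last_hidden_state` tensor above contains one embedding per token: the [CLS] token plus one token per image patch. A common pattern is to split these, using the [CLS] embedding as an image-level feature and the patch embeddings as dense, per-patch features. The sketch below uses a stand-in tensor with an assumed shape (2 images, 257 tokens, hidden size 384) rather than a real model output:

```python
import torch

# Stand-in for `outputs.last_hidden_state`; the shape (2 images,
# 257 tokens, hidden size 384) is an assumption for illustration.
last_hidden_state = torch.randn(2, 257, 384)

cls_embedding = last_hidden_state[:, 0]      # image-level feature vector
patch_embeddings = last_hidden_state[:, 1:]  # dense per-patch features
print(cls_embedding.shape, patch_embeddings.shape)
```

Image-level embeddings suit retrieval and classification, while per-patch features can feed dense tasks such as segmentation or depth estimation.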
BibTeX entry and citation info
```bibtex
@misc{oquab2023dinov2,
  title={DINOv2: Learning Robust Visual Features without Supervision},
  author={Maxime Oquab and Timothée Darcet and Théo Moutakanni and Huy Vo and Marc Szafraniec and Vasil Khalidov and Pierre Fernandez and Daniel Haziza and Francisco Massa and Alaaeldin El-Nouby and Mahmoud Assran and Nicolas Ballas and Wojciech Galuba and Russell Howes and Po-Yao Huang and Shang-Wen Li and Ishan Misra and Michael Rabbat and Vasu Sharma and Gabriel Synnaeve and Hu Xu and Hervé Jegou and Julien Mairal and Patrick Labatut and Armand Joulin and Piotr Bojanowski},
  year={2023},
  eprint={2304.07193},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
📄 License
This model is licensed under the Apache 2.0 license.