BEiT (base-sized model, fine-tuned on ImageNet-1k)
BEiT is a model pre-trained in a self-supervised manner on ImageNet-21k (14 million images, 21,841 classes) at a resolution of 224x224. It is then fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at the same resolution. This model was introduced in the paper BEIT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong, and Furu Wei, and was first released in this repository.
Disclaimer: The team that released BEiT did not write a model card for this model. This model card was written by the Hugging Face team.
Quick Start
You can use the raw model for image classification. Check out the model hub to find fine-tuned versions for tasks that interest you.
Features
- Self-supervised Pre-training: BEiT is pre-trained on a large collection of images (ImageNet-21k) in a self-supervised fashion, enabling it to learn an inner representation of images.
- Relative Position Embeddings: Unlike the original ViT models, BEiT uses relative position embeddings (similar to T5), which can better capture the spatial relationships between patches.
- Mean-pooling for Classification: BEiT performs image classification by mean-pooling the final hidden states of the patches, rather than using a linear layer on top of the [CLS] token.
Usage Examples
Basic Usage
Here's how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import BeitImageProcessor, BeitForImageClassification
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = BeitImageProcessor.from_pretrained('microsoft/beit-base-patch16-224')
model = BeitForImageClassification.from_pretrained('microsoft/beit-base-patch16-224')

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the image processor and the model support PyTorch only.
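The `logits` tensor in the example above has shape `(1, 1000)`, one score per ImageNet class. To inspect more than the single top prediction, `torch.topk` can be applied; the sketch below uses a random tensor as a stand-in for real model output, so the indices printed here are not meaningful class predictions:

```python
import torch

torch.manual_seed(0)
# Stand-in for `outputs.logits` from the example above: batch of 1, 1,000 classes.
logits = torch.randn(1, 1000)

# Convert scores to probabilities and take the top 5.
probs = logits.softmax(dim=-1)
top5 = torch.topk(probs, k=5, dim=-1)
for p, idx in zip(top5.values[0], top5.indices[0]):
    # With a real model, map `idx.item()` through `model.config.id2label`.
    print(f"class {idx.item()}: {p.item():.4f}")
```

With the actual model output, substitute `outputs.logits` for the random tensor.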
Documentation
Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pre-trained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective is to predict, for the masked patches, the visual tokens produced by the discrete VAE tokenizer of OpenAI's DALL-E.
Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Unlike the original ViT models, BEiT models use relative position embeddings (similar to T5) instead of absolute position embeddings, and classify images by mean-pooling the final hidden states of the patches rather than placing a linear layer on top of the final hidden state of the [CLS] token.
Through pre-training, the model learns an inner representation of images that can be used to extract features for downstream tasks: given a dataset of labeled images, for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places the linear layer on top of the [CLS] token, since the last hidden state of this token can be seen as a representation of the entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings and place the linear layer on top of that.
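The two classification strategies just described can be sketched with plain tensors. Random values stand in for the encoder's final hidden states; the hidden size of 768 and the 196-patch sequence length match the base model at 224x224 resolution:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
batch, num_patches, hidden = 1, 196, 768  # 14x14 patches of a 224x224 image; base hidden size

# Stand-in for the encoder's last hidden states: [CLS] token followed by patch tokens.
last_hidden = torch.randn(batch, 1 + num_patches, hidden)
head = nn.Linear(hidden, 1000)  # classification head over the 1,000 ImageNet classes

# Option 1: linear layer on the [CLS] token (the usual ViT recipe).
cls_logits = head(last_hidden[:, 0])

# Option 2: mean-pool the patch tokens, then classify (what BEiT does).
pooled = last_hidden[:, 1:].mean(dim=1)
pooled_logits = head(pooled)

print(cls_logits.shape, pooled_logits.shape)  # both torch.Size([1, 1000])
```

In practice, the real hidden states would come from `BeitModel(...).last_hidden_state` rather than random tensors.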
Training data
The BEiT model was pretrained on ImageNet-21k, a dataset consisting of 14 million images and 21k classes, and fine-tuned on ImageNet, a dataset consisting of 1 million images and 1k classes.
Training procedure
Preprocessing
The exact details of preprocessing of images during training/validation can be found here.
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
Pretraining
For all pre-training related hyperparameters, we refer to page 15 of the original paper.
Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Increasing the model size also generally improves performance.
Technical Details
- Model Type: Vision Transformer (ViT)
- Training Data: Pretrained on ImageNet-21k and fine-tuned on ImageNet 2012
- Input Format: Images are presented as a sequence of fixed-size patches (16x16)
- Position Embeddings: Relative position embeddings (similar to T5)
- Classification Method: Mean-pooling the final hidden states of the patches
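The patch layout above implies a fixed sequence length: a 224x224 input split into 16x16 patches yields (224 / 16)^2 = 196 patch tokens, plus one [CLS] token. A quick arithmetic check:

```python
image_size, patch_size = 224, 16

patches_per_side = image_size // patch_size   # 14 patches along each side
num_patches = patches_per_side ** 2           # 196 patch tokens
seq_len = num_patches + 1                     # +1 for the [CLS] token

print(patches_per_side, num_patches, seq_len)  # 14 196 197
```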
License
This model is licensed under the Apache-2.0 license.
BibTeX entry and citation info
@article{DBLP:journals/corr/abs-2106-08254,
author = {Hangbo Bao and
Li Dong and
Furu Wei},
title = {BEiT: {BERT} Pre-Training of Image Transformers},
journal = {CoRR},
volume = {abs/2106.08254},
year = {2021},
url = {https://arxiv.org/abs/2106.08254},
archivePrefix = {arXiv},
eprint = {2106.08254},
timestamp = {Tue, 29 Jun 2021 16:55:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={IEEE}
}