BEiT (base-sized model, fine-tuned on ImageNet-1k)
BEiT is a model pre-trained in a self-supervised manner on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224 and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 384x384. It provides strong pre-trained image representations for image classification tasks.
Quick Start
You can use the raw model for image classification. Check out the model hub to find fine-tuned versions for your specific task.
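For a quick test, the transformers image-classification pipeline can wrap the same checkpoint. This is a minimal sketch, assuming the microsoft/beit-base-patch16-384 checkpoint from the usage example below and an image URL of your choosing:

from transformers import pipeline

# Build an image-classification pipeline around the BEiT checkpoint
# used elsewhere in this card (an assumption, not prescribed by it).
classifier = pipeline("image-classification", model="microsoft/beit-base-patch16-384")

# Classify an image by URL; the pipeline handles download and preprocessing.
predictions = classifier("http://images.cocodataset.org/val2017/000000039769.jpg")
for p in predictions:
    print(p["label"], round(p["score"], 3))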
Features
- Self-supervised pre-training: BEiT is pre-trained on the large ImageNet-21k dataset in a self-supervised way, enabling it to learn rich image representations.
- Relative position embeddings: Unlike the original ViT models, BEiT uses relative position embeddings, which better capture the spatial relationships between image patches.
- Flexible feature extraction: It can be used to extract features for downstream tasks, such as training a classifier on a labeled image dataset (see the sketch after this list).
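As an illustration of feature extraction, the sketch below pulls patch-level hidden states out of the backbone with BeitModel. The checkpoint name is taken from the usage example in this card, and the mean-pooling step is an assumption for illustration, not the model's built-in classification head:

from transformers import BeitFeatureExtractor, BeitModel
from PIL import Image
import requests
import torch

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-patch16-384')
backbone = BeitModel.from_pretrained('microsoft/beit-base-patch16-384')

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = backbone(**inputs)

# last_hidden_state has shape (batch, 1 + num_patches, hidden_size);
# averaging the patch tokens gives one feature vector per image.
features = outputs.last_hidden_state[:, 1:, :].mean(dim=1)
print(features.shape)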
Installation
No specific installation steps are provided in the original README.
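In practice, the usage example below assumes the transformers library with a PyTorch backend, plus Pillow and requests for loading the sample image; something like `pip install transformers torch Pillow requests` should cover these dependencies, though exact versions are not specified by the original README.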
Usage Examples
Basic Usage
Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
from transformers import BeitFeatureExtractor, BeitForImageClassification
from PIL import Image
import requests
import torch

# Load a sample image from the COCO 2017 validation set
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# Load the feature extractor and the fine-tuned classification model
feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-patch16-384')
model = BeitForImageClassification.from_pretrained('microsoft/beit-base-patch16-384')

# Preprocess the image (resize, rescale, normalize) and run inference
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits

# The model predicts one of the 1,000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
Currently, both the feature extractor and model support PyTorch.
Documentation
Model description
The BEiT model is a Vision Transformer (ViT), a BERT-like transformer encoder model. It is pre-trained on ImageNet-21k at resolution 224x224 in a self-supervised fashion; the pre-training objective is to predict the visual tokens produced by the encoder of OpenAI's DALL-E VQ-VAE for the masked patches.
It is then fine-tuned on ImageNet (ILSVRC2012) at resolution 384x384. Images are presented to the model as a sequence of fixed-size 16x16 patches, which are linearly embedded. BEiT uses relative position embeddings and classifies images by mean-pooling the final hidden states of the patches (rather than using the [CLS] token).
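To make the mean-pooling classification scheme concrete, here is a minimal, self-contained PyTorch sketch. The tensor shapes are illustrative (576 patches correspond to a 384x384 image split into 16x16 patches, and 768 is the base model's hidden size), and the randomly initialized linear head stands in for the fine-tuned ImageNet classifier rather than reproducing it:

import torch
import torch.nn as nn

hidden_size, num_patches, num_classes = 768, 576, 1000  # base model, 384x384 input, ImageNet-1k

# Final hidden states for one image: [CLS] token followed by 576 patch tokens.
final_hidden = torch.randn(1, 1 + num_patches, hidden_size)

# BEiT-style head as described above: mean-pool the patch tokens, then classify.
pooled = final_hidden[:, 1:, :].mean(dim=1)  # (1, hidden_size)
head = nn.Linear(hidden_size, num_classes)
logits = head(pooled)                        # (1, num_classes)
print(logits.shape)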
Intended uses & limitations
You can use the raw model for image classification. Look for fine-tuned versions on the model hub for specific tasks.
Training data
The BEiT model was pre-trained on ImageNet-21k, a dataset of 14 million images and 21k classes, and fine-tuned on ImageNet, a dataset of 1 million images and 1k classes.
Training procedure
Preprocessing
The exact details of image preprocessing during training/validation can be found here. Images are resized/rescaled to the working resolution (224x224 during pre-training, 384x384 during fine-tuning) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
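As a rough illustration of the inference-time preprocessing this implies, the torchvision pipeline below resizes to 384x384 and applies the stated normalization. In practice, the BeitFeatureExtractor from the usage example performs the equivalent steps, so this sketch is only for reference:

from torchvision import transforms

# Approximate BEiT inference preprocessing for the 384x384 fine-tuned checkpoint:
# resize, convert to a tensor scaled to [0, 1], then normalize each RGB channel.
preprocess = transforms.Compose([
    transforms.Resize((384, 384)),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])

# pixel_values = preprocess(image).unsqueeze(0)  # (1, 3, 384, 384), ready for the model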
Pretraining
For all pre-training related hyperparameters, refer to page 15 of the original paper.
Evaluation results
For evaluation results on several image classification benchmarks, refer to tables 1 and 2 of the original paper. Note that higher resolution (384x384) and larger model sizes generally lead to better performance.
Technical Details
The BEiT model is based on the Vision Transformer architecture and uses self-supervised pre-training on ImageNet-21k to learn image representations. By predicting visual tokens for masked patches, it captures rich semantic information in images, and its relative position embeddings help the model understand the spatial relationships between image patches. During fine-tuning on ImageNet, it is further adapted to the specific classification task.
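The masked-patch objective can be illustrated with the BeitForMaskedImageModeling class from transformers. The sketch below uses a random mask, and the pre-training-only checkpoint name ('microsoft/beit-base-patch16-224-pt22k') is an assumption, since this card documents the fine-tuned classifier rather than that checkpoint:

from transformers import BeitFeatureExtractor, BeitForMaskedImageModeling
from PIL import Image
import requests
import torch

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# Pre-training-style model that predicts visual tokens for masked patches
# (checkpoint name assumed; not the fine-tuned classifier described above).
feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-patch16-224-pt22k')
model = BeitForMaskedImageModeling.from_pretrained('microsoft/beit-base-patch16-224-pt22k')

pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
num_patches = (model.config.image_size // model.config.patch_size) ** 2

# Randomly mask roughly half of the patches and predict their visual tokens.
bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()
with torch.no_grad():
    outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
print(outputs.logits.shape)  # logits over the visual-token vocabulary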
License
This project is licensed under the Apache-2.0 license.
BibTeX entry and citation info
@article{DBLP:journals/corr/abs-2106-08254,
  author    = {Hangbo Bao and
               Li Dong and
               Furu Wei},
  title     = {BEiT: {BERT} Pre-Training of Image Transformers},
  journal   = {CoRR},
  volume    = {abs/2106.08254},
  year      = {2021},
  url       = {https://arxiv.org/abs/2106.08254},
  archivePrefix = {arXiv},
  eprint    = {2106.08254},
  timestamp = {Tue, 29 Jun 2021 16:55:04 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
@inproceedings{deng2009imagenet,
  title={ImageNet: A large-scale hierarchical image database},
  author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
  booktitle={2009 IEEE Conference on Computer Vision and Pattern Recognition},
  pages={248--255},
  year={2009},
  organization={IEEE}
}
Information Table

| Property | Details |
|----------|---------|
| Model Type | Vision Transformer (ViT) |
| Training Data | Pre-trained on ImageNet-21k (14 million images, 21,841 classes), fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) |