Swin Transformer v2 (large-sized model)
The Swin Transformer v2 model is pre-trained on ImageNet-21k and fine-tuned on ImageNet-1k at a resolution of 384x384. It serves as a strong backbone for image classification and related vision tasks.
🚀 Quick Start
The Swin Transformer v2 model can be used for image classification. Fine-tuned versions for tasks that interest you are available on the model hub.
✨ Features
- Hierarchical Feature Maps: The Swin Transformer builds hierarchical feature maps by merging image patches in deeper layers, which benefits both image classification and dense recognition tasks.
- Linear Computational Complexity: Because self-attention is computed only within each local window, complexity grows linearly with input image size.
- Swin Transformer v2 Improvements:
  - A residual-post-norm method combined with cosine attention to enhance training stability.
  - A log-spaced continuous position bias method to transfer low-resolution pre-trained models to high-resolution downstream tasks effectively.
  - A self-supervised pre-training method, SimMIM, to reduce the need for large amounts of labeled images.
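The linear-vs-quadratic scaling claim above can be illustrated with a rough, stdlib-only cost model. This is a simplified sketch: the function names, dimensions, and window size are illustrative and not the paper's exact FLOP formulas.

```python
# Rough attention-cost sketch (illustrative, not the paper's exact FLOP counts).

def global_attention_cost(num_patches: int, dim: int) -> int:
    # Every patch attends to every other patch: O(N^2 * C).
    return num_patches ** 2 * dim

def window_attention_cost(num_patches: int, dim: int, window: int = 12) -> int:
    # Each patch attends only within its M x M window: O(N * M^2 * C).
    return num_patches * window ** 2 * dim

small = 24 * 24   # patches at some base resolution (illustrative)
large = 48 * 48   # 4x the patches after doubling height and width

# Quadrupling the patch count quadruples window-attention cost (linear)...
assert window_attention_cost(large, 128) == 4 * window_attention_cost(small, 128)
# ...but multiplies global-attention cost by 16 (quadratic).
assert global_attention_cost(large, 128) == 16 * global_attention_cost(small, 128)
```

This is why windowed attention lets the architecture scale to high-resolution inputs where global attention becomes prohibitive.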
📚 Documentation
Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches in deeper layers and has linear computational complexity with respect to input image size because self-attention is computed only within each local window. It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computational complexity with respect to input image size because self-attention is computed globally.
Swin Transformer v2 adds three main improvements: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) a log-spaced continuous position bias method to effectively transfer models pre-trained at low resolution to downstream tasks with high-resolution inputs; 3) a self-supervised pre-training method, SimMIM, to reduce the need for vast numbers of labeled images.
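The cosine attention in improvement 1) can be sketched in plain Python. This is an illustrative toy: in the real model the temperature is a learnable per-head parameter and a position bias term is added to the scores.

```python
import math

def cosine_attention_scores(queries, keys, tau=0.1):
    """Attention scores as cosine similarity divided by a temperature tau.

    Unlike dot-product attention, cosine similarity is bounded in [-1, 1],
    so scores are bounded by 1/tau regardless of the vectors' magnitudes,
    which helps stabilize training as model capacity grows.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    return [[cosine(q, k) / tau for k in keys] for q in queries]

# Rescaling a query leaves its scores (nearly) unchanged: only direction matters.
s1 = cosine_attention_scores([[1.0, 2.0]], [[0.5, 0.5], [-1.0, 2.0]])
s2 = cosine_attention_scores([[10.0, 20.0]], [[0.5, 0.5], [-1.0, 2.0]])
assert all(math.isclose(a, b) for a, b in zip(s1[0], s2[0]))
```

Because the scores are magnitude-invariant and bounded, a few dominant attention logits cannot blow up the softmax the way unnormalized dot products can.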
Intended uses & limitations
You can use the raw model for image classification. See the model hub to look for versions fine-tuned on a task that interests you.
💻 Usage Examples
Basic Usage
Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests

# Load an example image from the COCO 2017 validation set
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-large-patch4-window12to24-192to384-22kto1k-ft")
model = AutoModelForImageClassification.from_pretrained("microsoft/swinv2-large-patch4-window12to24-192to384-22kto1k-ft")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# The model predicts one of the 1,000 ImageNet-1k classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
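To turn raw logits into class probabilities and a top-5 list, you can apply a standard softmax (in practice `torch.softmax(logits, dim=-1)` on the tensor above). Here is a minimal stdlib-only sketch operating on dummy logits; the helper names are illustrative.

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(probs, k=5):
    # Return (class_index, probability) pairs, highest probability first.
    return sorted(enumerate(probs), key=lambda pair: pair[1], reverse=True)[:k]

dummy_logits = [0.5, 2.0, -1.0, 3.0]
probs = softmax(dummy_logits)
assert math.isclose(sum(probs), 1.0)
assert top_k(probs, k=2)[0][0] == 3  # index of the largest logit
```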
For more code examples, we refer to the documentation.
BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2111-09883,
  author    = {Ze Liu and
               Han Hu and
               Yutong Lin and
               Zhuliang Yao and
               Zhenda Xie and
               Yixuan Wei and
               Jia Ning and
               Yue Cao and
               Zheng Zhang and
               Li Dong and
               Furu Wei and
               Baining Guo},
  title     = {Swin Transformer {V2:} Scaling Up Capacity and Resolution},
  journal   = {CoRR},
  volume    = {abs/2111.09883},
  year      = {2021},
  url       = {https://arxiv.org/abs/2111.09883},
  eprinttype = {arXiv},
  eprint    = {2111.09883},
  timestamp = {Thu, 02 Dec 2021 15:54:22 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2111-09883.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
📄 License
This model is licensed under the Apache-2.0 license.
| Property | Details |
|----------|---------|
| Model Type | Vision Transformer for image classification |
| Training Data | ImageNet-21k for pre-training, ImageNet-1k for fine-tuning |
⚠️ Important Note
The team releasing Swin Transformer v2 did not write a model card for this model, so this model card has been written by the Hugging Face team.