Hiera Model (Tiny, fine-tuned on IN1K)
Hiera is a hierarchical vision transformer. It's fast, powerful, and most importantly, simple. Introduced in the paper Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles, it outperforms the state of the art in a wide range of image and video tasks while being much faster.
🚀 Quick Start
This section provides a high-level overview of the Hiera model and its capabilities.
⨠Features
How does it work?

Vision transformers like ViT maintain the same spatial resolution and number of features throughout the network. However, this is inefficient: early layers don't need many features, and later layers don't require high spatial resolution. Prior hierarchical models like ResNet accounted for this by using fewer features at the start and lower spatial resolution at the end.
Several domain-specific vision transformers, such as Swin or MViT, have adopted a hierarchical design. But in the quest for state-of-the-art results on ImageNet-1K with fully supervised training, these models have become increasingly complex as they add specialized modules to compensate for the spatial biases that ViTs lack. Although these changes lead to effective models with good FLOP counts, the added complexity actually makes these models slower overall.
We demonstrate that much of this complexity is unnecessary. Instead of manually adding spatial biases through architectural changes, we choose to teach the model these biases. By training with MAE, we can simplify or remove all of these bulky modules in existing transformers and increase accuracy in the process. The result is Hiera, an extremely efficient and simple architecture that outperforms the state of the art in several image and video recognition tasks.
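To make the hierarchical trade-off concrete, here is a minimal sketch of how spatial resolution shrinks while channel width grows across Hiera's four stages. The numbers are illustrative assumptions (a 224x224 input, a stride-4 patch embedding, and Hiera-Tiny-like widths), not values read from this checkpoint's config:

# Illustrative stage shapes, assuming a 224x224 input and Hiera-Tiny-like widths
resolution, channels = 56, 96  # 224 / 4 after the patch embedding (assumption)
for stage in range(1, 5):
    print(f"stage {stage}: {resolution}x{resolution} tokens, {channels} channels")
    resolution //= 2  # spatial resolution halves between stages
    channels *= 2     # channel width doubles between stages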
📚 Documentation
Intended uses & limitations
Hiera can be used for image classification, feature extraction, or masked image modeling. This specific checkpoint is intended for feature extraction.
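For reference, classification with Hiera follows the usual transformers pattern. The sketch below uses HieraForImageClassification; the checkpoint name facebook/hiera-tiny-224-in1k-hf is an assumption here, standing in for any Hiera checkpoint fine-tuned for ImageNet-1K classification:

from transformers import AutoImageProcessor, HieraForImageClassification
import torch
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Checkpoint name is an assumption; substitute any Hiera classification checkpoint
checkpoint = "facebook/hiera-tiny-224-in1k-hf"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = HieraForImageClassification.from_pretrained(checkpoint)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])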
💻 Usage Examples
Basic Usage
from transformers import AutoImageProcessor, HieraModel
import torch
from PIL import Image
import requests

# Load a sample image from the COCO validation set
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Load the processor and the bare Hiera encoder
image_processor = AutoImageProcessor.from_pretrained("facebook/hiera-huge-224-hf")
model = HieraModel.from_pretrained("facebook/hiera-huge-224-hf")

# Preprocess the image and run a forward pass
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
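As a quick sanity check, the encoder's final-stage features can be inspected through the standard last_hidden_state field (the exact shape depends on the checkpoint and input size):

print(outputs.last_hidden_state.shape)  # (batch, tokens, hidden_dim) for the final stage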
Advanced Usage
You can also extract feature maps from different stages of the model by using HieraBackbone and setting out_features when loading the model. For example, this is how you would extract feature maps from every stage:
from transformers import AutoImageProcessor, HieraBackbone
import torch
from PIL import Image
import requests

# Load a sample image from the COCO validation set
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Request feature maps from all four stages via out_features
image_processor = AutoImageProcessor.from_pretrained("facebook/hiera-huge-224-hf")
model = HieraBackbone.from_pretrained(
    "facebook/hiera-huge-224-hf",
    out_features=["stage1", "stage2", "stage3", "stage4"],
)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One feature map per requested stage
feature_maps = outputs.feature_maps
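Each returned feature map is a (batch, channels, height, width) tensor, one per requested stage. A quick way to see the hierarchy in action is to print the shapes:

# Print one shape per requested stage; resolution shrinks and width grows stage by stage
for name, feature_map in zip(model.out_features, feature_maps):
    print(name, tuple(feature_map.shape))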
BibTeX entry and citation info
If you use Hiera or this code in your work, please cite:
@article{ryali2023hiera,
title={Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles},
author={Ryali, Chaitanya and Hu, Yuan-Ting and Bolya, Daniel and Wei, Chen and Fan, Haoqi and Huang, Po-Yao and Aggarwal, Vaibhav and Chowdhury, Arkabandhu and Poursaeed, Omid and Hoffman, Judy and Malik, Jitendra and Li, Yanghao and Feichtenhofer, Christoph},
journal={ICML},
year={2023}
}
📄 License
The license for this project is cc-by-nc-4.0.
| Property | Details |
|----------|---------|
| Datasets | imagenet-1k |
| Language | en |
| Library Name | transformers |
| License | cc-by-nc-4.0 |