# 🚀 Hiera Model (Tiny, fine-tuned on IN1K)
Hiera is a hierarchical vision transformer that is fast, powerful, and, above all, simple. It offers high performance across a wide range of image and video tasks while remaining highly efficient.
| Property | Details |
| --- | --- |
| Datasets | imagenet-1k |
| Library Name | transformers |
| License | cc-by-nc-4.0 |
## 🚀 Quick Start
Hiera is a hierarchical vision transformer introduced in the paper [Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles](https://arxiv.org/abs/2306.00989). It outperforms the state of the art on a range of image and video tasks while running considerably faster.
## ✨ Features

### How does it work?

Traditional vision transformers like ViT use the same spatial resolution and number of features throughout the network, which is inefficient: early layers don't need many features, and later layers don't need high spatial resolution. Prior hierarchical models such as ResNet addressed this by using fewer features at the start and lower spatial resolution at the end.

Several domain-specific vision transformers, such as Swin or MViT, adopted this hierarchical design. However, in the pursuit of state-of-the-art results on ImageNet-1K with fully supervised training, they added specialized modules to compensate for the spatial biases that ViTs lack, making the models more complex and slower.

Hiera simplifies this. Instead of adding spatial biases by hand, it teaches the model these biases by pretraining with MAE. This makes it possible to simplify or remove all of the bulky modules in existing hierarchical transformers, increasing accuracy in the process. You can see the hierarchy directly in the checkpoint's configuration, as sketched below.
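As a rough illustration, the following sketch (our addition, not part of the original card) inspects the checkpoint's configuration to show how depth and width are allocated per stage. It assumes the `transformers` `HieraConfig` exposes `embed_dim`, `depths`, `num_heads`, and `embed_dim_multiplier` attributes; verify the field names against your installed version.

```python
from transformers import AutoConfig

# Load only the configuration of the Hiera Tiny checkpoint (no weights downloaded).
config = AutoConfig.from_pretrained("facebook/hiera-tiny-224-hf")

# Assumed HieraConfig fields: the channel width grows by `embed_dim_multiplier`
# per stage while spatial resolution is pooled down, mirroring the ResNet-style
# hierarchy described above.
print("initial embed dim:", config.embed_dim)
print("blocks per stage: ", config.depths)
print("heads per stage:  ", config.num_heads)
print("width multiplier: ", config.embed_dim_multiplier)
```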
## 📚 Documentation

### Intended uses & limitations

Hiera can be used for image classification, feature extraction, or masked image modeling. This particular checkpoint is intended for feature extraction.
## 💻 Usage Examples

### Basic Usage
```python
from transformers import AutoImageProcessor, HieraModel
import torch
from PIL import Image
import requests

# Download an example image from the COCO validation set.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Load the image processor and the pretrained Hiera Tiny model.
image_processor = AutoImageProcessor.from_pretrained("facebook/hiera-tiny-224-hf")
model = HieraModel.from_pretrained("facebook/hiera-tiny-224-hf")

# Preprocess the image and run a forward pass.
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
```
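For feature extraction you would typically read the token embeddings off the output object. Assuming the standard `transformers` convention of a `last_hidden_state` field (an assumption on our part, not shown in the original card), that looks like:

```python
# Token features of shape (batch_size, sequence_length, hidden_size),
# ready to feed into a downstream head.
features = outputs.last_hidden_state
print(features.shape)
```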
### Advanced Usage

You can also extract feature maps from the different stages of the model by using `HieraBackbone` and setting `out_features` when loading the model. This is how you would extract the feature maps from every stage:
```python
from transformers import AutoImageProcessor, HieraBackbone
import torch
from PIL import Image
import requests

# Download an example image from the COCO validation set.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Load the processor and the backbone, requesting feature maps from all four stages.
image_processor = AutoImageProcessor.from_pretrained("facebook/hiera-tiny-224-hf")
model = HieraBackbone.from_pretrained(
    "facebook/hiera-tiny-224-hf",
    out_features=["stage1", "stage2", "stage3", "stage4"],
)

# Preprocess the image and run a forward pass.
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One feature map per requested stage, ordered from stage1 to stage4.
feature_maps = outputs.feature_maps
```
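Printing the shape of each returned feature map makes the hierarchy visible: the spatial resolution should shrink and the channel count grow from one stage to the next. This is a sketch under the usual `transformers` backbone convention that `feature_maps` is a tuple of `(batch, channels, height, width)` tensors returned in the order of `out_features`:

```python
# Assumes feature_maps[i] corresponds to the i-th requested stage.
for name, fmap in zip(["stage1", "stage2", "stage3", "stage4"], feature_maps):
    print(name, tuple(fmap.shape))
```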
## BibTeX entry and citation info
If you use Hiera or this code in your work, please cite:
```bibtex
@article{ryali2023hiera,
  title={Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles},
  author={Ryali, Chaitanya and Hu, Yuan-Ting and Bolya, Daniel and Wei, Chen and Fan, Haoqi and Huang, Po-Yao and Aggarwal, Vaibhav and Chowdhury, Arkabandhu and Poursaeed, Omid and Hoffman, Judy and Malik, Jitendra and Li, Yanghao and Feichtenhofer, Christoph},
  journal={ICML},
  year={2023}
}
```