VideoMAE (large-sized model, pre-trained only)
VideoMAE model pre-trained on Kinetics-400 in a self-supervised way for 1600 epochs. It was introduced in the paper VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training by Tong et al. and first released in this repository.
Disclaimer: The team releasing VideoMAE did not write a model card for this model so this model card has been written by the Hugging Face team.
Features
VideoMAE extends Masked Autoencoders (MAE) to video. Its architecture is similar to a standard Vision Transformer (ViT), with a decoder on top for predicting pixel values for masked patches.
Videos are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. A [CLS] token is added at the start of the sequence for classification tasks. Fixed sine/cosine position embeddings are added before the sequence is fed to the Transformer encoder layers.
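To make the fixed sine/cosine position embeddings concrete, here is a minimal sketch of the standard sinusoidal table; the exact construction used in the released code may differ in detail, and an even hidden size is assumed.

import math
import torch

def sincos_position_embeddings(num_positions: int, dim: int) -> torch.Tensor:
    # standard fixed table: even dimensions use sin, odd dimensions use cos
    # (assumes dim is even, which holds for typical hidden sizes)
    position = torch.arange(num_positions, dtype=torch.float32).unsqueeze(1)
    div_term = torch.exp(torch.arange(0, dim, 2, dtype=torch.float32) * (-math.log(10000.0) / dim))
    table = torch.zeros(num_positions, dim)
    table[:, 0::2] = torch.sin(position * div_term)
    table[:, 1::2] = torch.cos(position * div_term)
    return table  # shape (num_positions, dim), added to the patch embeddings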
Through pre-training, the model learns an inner representation of videos, which can be used to extract features for downstream tasks. For example, with a labeled video dataset, a standard classifier can be trained by placing a linear layer on top of the pre-trained encoder, typically on the [CLS] token as its last hidden state represents the entire video.
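As an illustration of feature extraction, the sketch below runs a (dummy) clip through the plain encoder (VideoMAEModel) and pools the final hidden states into a single clip embedding; mean pooling here is an assumption for illustration, not a prescribed recipe, and the decoder weights of the pre-training checkpoint are simply not used.

import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEModel

# 16 frames of random data standing in for a real 3 x 224 x 224 video clip
video = list(np.random.randn(16, 3, 224, 224))

processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-large")
model = VideoMAEModel.from_pretrained("MCG-NJU/videomae-large")

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch, sequence_length, hidden_size);
# pooling over the patch dimension gives one embedding per clip
clip_embedding = outputs.last_hidden_state.mean(dim=1)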
Quick Start
You can use the raw model for predicting pixel values for masked patches of a video, but it's mostly intended to be fine-tuned on a downstream task. Check the model hub for fine-tuned versions on tasks that interest you.
Usage Examples
Basic Usage
Here is how to use this model to predict pixel values for randomly masked patches:
from transformers import VideoMAEImageProcessor, VideoMAEForPreTraining
import numpy as np
import torch

num_frames = 16
# 16 frames of random data standing in for a real 3 x 224 x 224 video clip
video = list(np.random.randn(16, 3, 224, 224))

processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-large")
model = VideoMAEForPreTraining.from_pretrained("MCG-NJU/videomae-large")

# preprocess the frames into a (1, num_frames, 3, 224, 224) tensor
pixel_values = processor(video, return_tensors="pt").pixel_values

# each frame is split into 16x16 patches; tubelets group frames along the time axis
num_patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2
seq_length = (num_frames // model.config.tubelet_size) * num_patches_per_frame

# randomly mask roughly half of the patch positions (Bernoulli sampling)
bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool()

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss = outputs.loss  # reconstruction loss on the masked patches
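The Bernoulli mask above is only for illustration; the paper pre-trains with tube masking at a very high masking ratio (around 90%). Continuing the example above, here is a rough sketch of such a mask, assuming the patch sequence is ordered temporal segment first; the 0.9 ratio is illustrative.

# rough sketch of tube masking: the same spatial positions are masked in every
# temporal segment, at a high ratio (~90% in the paper; value illustrative)
mask_ratio = 0.9
num_temporal_segments = num_frames // model.config.tubelet_size
num_masked_per_frame = int(mask_ratio * num_patches_per_frame)

spatial_mask = torch.zeros(num_patches_per_frame, dtype=torch.bool)
spatial_mask[torch.randperm(num_patches_per_frame)[:num_masked_per_frame]] = True

# tile the spatial pattern over the temporal segments -> shape (1, seq_length)
bool_masked_pos = spatial_mask.repeat(num_temporal_segments).unsqueeze(0)

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss = outputs.loss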
For more code examples, refer to the documentation.
Documentation
Intended uses & limitations
You can use the raw model for predicting pixel values for masked patches of a video, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
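As a hedged sketch of the fine-tuning path, the snippet below loads the pre-trained encoder into VideoMAEForVideoClassification; the label count (num_labels=400) is purely illustrative, and the classification head is randomly initialized until you fine-tune it on your own labeled data.

import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

video = list(np.random.randn(16, 3, 224, 224))  # dummy 16-frame clip

processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-large")
model = VideoMAEForVideoClassification.from_pretrained(
    "MCG-NJU/videomae-large",
    num_labels=400,  # illustrative; set to the number of classes in your dataset
)

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# logits are meaningless until the head is fine-tuned
predicted_class = outputs.logits.argmax(-1).item()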
Training data
(to do, feel free to open a PR)
Training procedure
Preprocessing
(to do, feel free to open a PR)
Pretraining
(to do, feel free to open a PR)
Evaluation results
(to do, feel free to open a PR)
BibTeX entry and citation info
@misc{https://doi.org/10.48550/arxiv.2203.12602,
doi = {10.48550/ARXIV.2203.12602},
url = {https://arxiv.org/abs/2203.12602},
author = {Tong, Zhan and Song, Yibing and Wang, Jue and Wang, Limin},
keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
title = {VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
License
This model is released under the CC-BY-NC-4.0 license.