
VideoMAEv2-Base

Developed by OpenGVLab
VideoMAEv2-Base is a self-supervised video feature extraction model that employs a dual masking mechanism and is pre-trained on the UnlabeledHybrid-1M dataset.
Downloads 3,565
Release Time: 1/14/2025

Model Overview

This model learns video feature representations through self-supervision and can be applied to downstream tasks such as video classification.
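As a rough illustration of feature extraction, the sketch below loads the checkpoint from the Hugging Face Hub and pools patch-token features into a single clip-level vector. The repository id OpenGVLab/VideoMAEv2-Base, the need for trust_remote_code=True, the (batch, channels, frames, height, width) input layout with 16 frames of 224x224 RGB, and the shape of the returned features are assumptions to verify against the official model card.

```python
# Minimal sketch: clip-level feature extraction with VideoMAEv2-Base.
# Assumptions (verify against the model card): the Hub repo id, the need for
# trust_remote_code, the (B, C, T, H, W) input layout, and that the forward
# pass returns a (B, N, D) tensor of patch-token features.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("OpenGVLab/VideoMAEv2-Base", trust_remote_code=True)
model.eval()

# Dummy clip: 1 video, 3 channels, 16 frames, 224x224 pixels (random values
# stand in for a properly decoded and normalized clip).
clip = torch.randn(1, 3, 16, 224, 224)

with torch.no_grad():
    tokens = model(clip)               # assumed (B, N, D) patch-token features
    clip_feature = tokens.mean(dim=1)  # average-pool tokens into one vector

print(clip_feature.shape)              # expected: torch.Size([1, D])
```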

Model Features

Dual masking mechanism
Utilizes an innovative dual masking strategy to enhance video representation learning (see the sketch after this list)
Self-supervised pre-training
Pre-trained on the UnlabeledHybrid-1M dataset via self-supervision
Video feature extraction
Feature extraction capability specifically optimized for video data
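To make the dual masking idea concrete, the sketch below builds the two Boolean masks such a scheme needs: a high-ratio mask that hides most patch tokens from the encoder, and a second mask that keeps only a subset of the hidden tokens as reconstruction targets for the decoder. The 90%/50% ratios and the purely random sampling are illustrative assumptions rather than the exact strategy used during pre-training.

```python
# Illustrative sketch of dual masking: the encoder sees only a small set of
# visible tokens, and the decoder reconstructs only a subset of the masked
# tokens. Ratios and random sampling are assumptions for illustration.
import torch

def dual_masks(num_tokens: int, encoder_ratio: float = 0.9, decoder_ratio: float = 0.5):
    # Encoder mask: True = token hidden from the encoder.
    num_masked = int(num_tokens * encoder_ratio)
    perm = torch.randperm(num_tokens)
    encoder_mask = torch.zeros(num_tokens, dtype=torch.bool)
    encoder_mask[perm[:num_masked]] = True

    # Decoder targets: among the hidden tokens, keep only a fraction as
    # reconstruction targets, which shrinks the decoder's workload.
    masked_idx = perm[:num_masked]
    num_targets = int(num_masked * decoder_ratio)
    decoder_targets = torch.zeros(num_tokens, dtype=torch.bool)
    decoder_targets[masked_idx[torch.randperm(num_masked)[:num_targets]]] = True
    return encoder_mask, decoder_targets

enc_mask, dec_targets = dual_masks(num_tokens=1568)  # e.g. 8x14x14 patch tokens
print(enc_mask.sum().item(), "tokens hidden;", dec_targets.sum().item(), "reconstruction targets")
```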

Model Capabilities

Video feature extraction
Video representation learning

Use Cases

Video analysis
Video classification
Extract video features for classification tasks
Video retrieval
Content-based video retrieval systems (a ranking sketch follows this list)
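For content-based retrieval, a common pattern is to pre-compute one feature vector per indexed video and rank candidates by cosine similarity to the query clip's feature. The sketch below assumes per-clip features have already been extracted (for example with the pooling shown earlier) and only illustrates the ranking step; the feature dimension of 768 is an assumption.

```python
# Rank indexed videos by cosine similarity to a query feature vector.
# Feature extraction itself is assumed to have been done beforehand.
import torch
import torch.nn.functional as F

def rank_by_similarity(query: torch.Tensor, index: torch.Tensor, top_k: int = 5):
    """query: (D,) feature; index: (N, D) features of indexed videos."""
    sims = F.cosine_similarity(query.unsqueeze(0), index, dim=1)  # (N,)
    scores, ids = sims.topk(min(top_k, index.shape[0]))
    return list(zip(ids.tolist(), scores.tolist()))

# Toy example with random stand-in features (dimension 768 is an assumption).
query_feat = torch.randn(768)
index_feats = torch.randn(100, 768)
print(rank_by_similarity(query_feat, index_feats, top_k=3))
```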