X-CLIP (base-sized model)
X-CLIP is a minimal extension of CLIP for general video-language understanding. Trained on Kinetics-400, it can be used for tasks such as zero-shot, few-shot, or fully supervised video classification and video-text retrieval.
Quick Start
You can use the raw model to determine how well a piece of text goes with a given video. See the model hub to find fine-tuned versions for tasks that interest you, and check the documentation for code examples.
Features
- General Video-Language Understanding: X-CLIP is an extended version of CLIP, trained in a contrastive way on (video, text) pairs, enabling it to handle a variety of video-related tasks.
- Multiple Task Support: It can be used for zero-shot, few-shot, or fully supervised video classification and video-text retrieval.
Installation
The original card provides no installation steps; in practice the model can be loaded via the Hugging Face `transformers` library (`pip install transformers`).
Usage Examples
The original card provides no code examples; see the documentation linked above for end-to-end usage.
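As an illustration of the matching idea behind such usage, here is a minimal numpy sketch: given one video embedding and several candidate text embeddings (placeholder random values here, not real model outputs), CLIP-style models score texts by scaled cosine similarity followed by a softmax. This is a conceptual sketch, not the model's actual inference code.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def match_scores(video_emb, text_embs, logit_scale=100.0):
    """Score candidate texts against one video embedding.

    CLIP-style models compare L2-normalized embeddings with a scaled
    dot product (cosine similarity), then softmax over the texts.
    """
    v = video_emb / np.linalg.norm(video_emb)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = logit_scale * (t @ v)  # one logit per candidate text
    return softmax(logits)

# Toy embeddings standing in for real model outputs.
rng = np.random.default_rng(0)
video_emb = rng.normal(size=512)
text_embs = rng.normal(size=(3, 512))
probs = match_scores(video_emb, text_embs)
print(probs)  # probabilities over the 3 candidate texts
```

The highest-probability text is the one the model considers the best description of the video.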
Documentation
Model description
X-CLIP is a minimal extension of CLIP for general video-language understanding. The model is trained in a contrastive way on (video, text) pairs.

This allows the model to be used for tasks such as zero-shot, few-shot, or fully supervised video classification and video-text retrieval.
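The contrastive objective mentioned above can be sketched as a symmetric cross-entropy over the batch similarity matrix. The following is a simplified numpy illustration of this InfoNCE-style loss, not the exact training code; the temperature value is an assumption.

```python
import numpy as np

def contrastive_loss(video_embs, text_embs, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched (video, text) pairs.

    Matched pairs sit on the diagonal of the similarity matrix; the loss
    pulls each video toward its own text and vice versa.
    """
    v = video_embs / np.linalg.norm(video_embs, axis=1, keepdims=True)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = (v @ t.T) / temperature  # (batch, batch) similarity matrix
    labels = np.arange(len(logits))

    def xent(l):
        # Cross-entropy with the diagonal as the target class.
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the video->text and text->video directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

When video and text embeddings of matched pairs coincide, the diagonal dominates and the loss approaches zero.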
Intended uses & limitations
You can use the raw model to determine how well a text goes with a given video. See the model hub to look for fine-tuned versions on a task that interests you.
Training data
This model was trained on Kinetics-400.
Preprocessing
The exact details of preprocessing during training and validation can be found in the original repository.
During validation, the shorter edge of each frame is resized, after which a center crop to a fixed resolution (e.g. 224x224) is taken. Frames are then normalized across the RGB channels using the ImageNet mean and standard deviation.
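The validation-time pipeline can be sketched in plain numpy as follows. This is a simplified illustration (nearest-neighbor resize; the real preprocessing uses proper interpolation), assuming the standard ImageNet statistics.

```python
import numpy as np

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def resize_shorter_edge(frame, size):
    """Nearest-neighbor resize so the shorter edge equals `size`."""
    h, w, _ = frame.shape
    scale = size / min(h, w)
    nh, nw = round(h * scale), round(w * scale)
    rows = (np.arange(nh) * h / nh).astype(int)
    cols = (np.arange(nw) * w / nw).astype(int)
    return frame[rows][:, cols]

def center_crop(frame, size):
    """Take a size x size crop from the center of the frame."""
    h, w, _ = frame.shape
    top, left = (h - size) // 2, (w - size) // 2
    return frame[top:top + size, left:left + size]

def preprocess(frame, size=224):
    """Resize shorter edge -> center crop -> ImageNet normalization."""
    frame = resize_shorter_edge(frame.astype(np.float32) / 255.0, size)
    frame = center_crop(frame, size)
    return (frame - IMAGENET_MEAN) / IMAGENET_STD

# A dummy 360x640 RGB frame run through the pipeline.
frame = np.random.default_rng(0).integers(0, 256, (360, 640, 3)).astype(np.uint8)
out = preprocess(frame)
print(out.shape)  # (224, 224, 3)
```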
Evaluation results
| Property | Details |
|---|---|
| Model Type | X-CLIP (base-sized model, patch resolution of 16) |
| Training Data | Kinetics-400 |
| HMDB-51 Top-1 Accuracy | 44.6% |
| UCF-101 Top-1 Accuracy | 72.0% |
| Kinetics-600 Top-1 Accuracy | 65.2% |
Technical Details
The X-CLIP model (base-sized, patch resolution of 16) was trained on [Kinetics-400](https://www.deepmind.com/open-source/kinetics). It was introduced in the paper Expanding Language-Image Pretrained Models for General Video Recognition by Ni et al. and first released in [this repository](https://github.com/microsoft/VideoX/tree/master/X-CLIP).
This model was trained using 32 frames per video, at a resolution of 224x224.
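The card does not specify how the 32 frames are drawn from a longer clip. A common choice, sketched below under that assumption, is uniform temporal sampling: take the center frame of each of 32 equal-length segments.

```python
import numpy as np

def sample_frame_indices(num_total, num_frames=32):
    """Uniformly sample `num_frames` indices from a clip of `num_total` frames.

    An assumed (but common) strategy: split the clip into `num_frames`
    equal temporal segments and take the center of each segment.
    """
    segment = num_total / num_frames
    return (np.arange(num_frames) * segment + segment / 2).astype(int)

idx = sample_frame_indices(300)  # e.g. a 10 s clip at 30 fps
print(len(idx))  # 32
```

The selected frames would then each be preprocessed to 224x224 before being stacked into the model's video input.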
License
This model is released under the MIT license.