VGen
VGen is an open-source video synthesis codebase developed by the Tongyi Lab of Alibaba Group. It features state-of-the-art video generative models and can generate high-quality videos from various inputs.
Quick Start
VGen can produce high-quality videos from input text, images, desired motion, desired subjects, and feedback signals. It also offers a variety of commonly used video generation tools such as visualization, sampling, training, inference, joint training using images and videos, acceleration, etc.
Train your text-to-video model
You can enable distributed training simply by executing the following command:
python train_net.py --cfg configs/t2v_train.yaml
In the t2v_train.yaml configuration file, you can specify the data, adjust the video-to-image ratio using frame_lens, and validate your ideas with different Diffusion settings, etc.
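As a rough illustration, the snippet below loads the configuration from Python, overrides frame_lens, and writes a modified copy to train from; the exact YAML layout and the values shown are assumptions, not the shipped defaults.

import yaml  # PyYAML

# Load the stock training configuration (layout assumed; adjust to the actual file).
with open('configs/t2v_train.yaml') as f:
    cfg = yaml.safe_load(f)

# frame_lens controls the video-to-image ratio during joint training;
# the values below are purely illustrative.
cfg['frame_lens'] = [1, 16, 16, 16]

# Save a modified copy and train with:
#   python train_net.py --cfg configs/t2v_train_custom.yaml
with open('configs/t2v_train_custom.yaml', 'w') as f:
    yaml.safe_dump(cfg, f)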
- Before the training, you can download any of our open-source models for initialization. Our codebase supports custom initialization and grad_scale settings, all of which are included in the Pretrain item in the yaml file.
- During the training, you can view the saved models and intermediate inference results in the workspace/experiments/t2v_train directory.
After the training is completed, you can perform inference on the model using the following command:
python inference.py --cfg configs/t2v_infer.yaml
Then you can find the generated videos in the workspace/experiments/test_img_01 directory. For specific configurations such as data, models, seed, etc., please refer to the t2v_infer.yaml file.
Run the I2VGen-XL model
(i) Download model and test data:
!pip install modelscope
from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('damo/I2VGen-XL', cache_dir='models/', revision='v1.0.0')
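Here, snapshot_download returns the local directory under models/ into which the model weights and test data have been placed.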
(ii) Run the following command:
python inference.py --cfg configs/i2vgen_xl_infer.yaml
In a few minutes, you can find the generated high-definition video in the workspace/experiments/test_img_01 directory. At present, we find that the model performs inadequately on anime images and images with a black background due to a lack of relevant training data. We are continually working to optimize it.
Other methods
In preparation.
Features
- Expandability, allowing for easy management of your own experiments.
- Completeness, encompassing all common components for video generation.
- Excellent performance, featuring powerful pre-trained models in multiple tasks.
Installation
conda create -n vgen python=3.8
conda activate vgen
pip install torch==1.12.0+cu113 torchvision==0.13.0+cu113 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
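As a quick sanity check (a minimal sketch; the expected versions correspond to the CUDA 11.3 wheels installed above), you can confirm that the GPU build of PyTorch is active:

import torch
import torchvision

print(torch.__version__)          # expected: 1.12.0+cu113
print(torchvision.__version__)    # expected: 0.13.0+cu113
print(torch.cuda.is_available())  # should print True on a CUDA-capable machine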
Datasets
We have provided a demo dataset that includes images and videos, along with their lists, in the data directory.
Please note that the demo images used here are for testing purposes and were not included in the training.
Clone codebase
git clone https://github.com/damo-vilab/i2vgen-xl.git
cd i2vgen-xl
Usage Examples
Basic Usage
Train your text-to-video model
python train_net.py --cfg configs/t2v_train.yaml
Run the I2VGen-XL model
python inference.py --cfg configs/i2vgen_xl_infer.yaml
Advanced Usage
Integration of I2VGenXL with 🧨 diffusers
import torch
from diffusers import I2VGenXLPipeline
from diffusers.utils import load_image, export_to_gif

# Load the fp16 variant of the pipeline and move it to the GPU.
repo_id = "ali-vilab/i2vgen-xl"
pipeline = I2VGenXLPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, variant="fp16").to("cuda")

# Conditioning image and text prompt.
image_url = "https://github.com/ali-vilab/i2vgen-xl/blob/main/data/test_images/img_0009.png?download=true"
image = load_image(image_url).convert("RGB")
prompt = "Papers were floating in the air on a table in the library"

# Fix the random seed for reproducible sampling.
generator = torch.manual_seed(8888)

frames = pipeline(
    prompt=prompt,
    image=image,
    generator=generator
).frames[0]

# export_to_gif writes the frames to a GIF file and returns its path.
print(export_to_gif(frames))
Find the official documentation here.
Documentation
Customize your own approach
Our codebase essentially supports all the commonly used components in video generation. You can manage your experiments flexibly by adding the corresponding registration classes, including ENGINE, MODEL, DATASETS, EMBEDDER, AUTO_ENCODER, DISTRIBUTION, VISUAL, DIFFUSION, and PRETRAIN, and stay compatible with all our open-source algorithms according to your own needs. If you have any questions, feel free to give us your feedback at any time.
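For instance, a new model could be registered roughly as follows; this is a minimal sketch, and the import path utils.registry_class and the register_class decorator are assumptions, so check the codebase for the exact registry API:

import torch.nn as nn
from utils.registry_class import MODEL  # assumed location of the MODEL registry

@MODEL.register_class()  # assumed decorator name
class MyToyUNet(nn.Module):
    # A toy network; once registered, it can be selected by name from the yaml config.
    def __init__(self, in_dim=4, out_dim=4):
        super().__init__()
        self.proj = nn.Conv3d(in_dim, out_dim, kernel_size=1)

    def forward(self, x, t=None, **kwargs):
        return self.proj(x)

The same pattern would apply to the other registries (DATASETS, DIFFUSION, and so on), with each registered class then referenced by name in the corresponding yaml entry.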
News!!!
- [2023.12] We release the high-efficiency video generation method VideoLCM
- [2023.12] We release the code and model of I2VGen-XL and the ModelScope T2V
- [2023.12] We release the T2V method [HiGen](https://higen-t2v.github.io) and the T2V customization method [DreamVideo](https://dreamvideo-t2v.github.io).
- [2023.12] We write an introduction document for VGen and compare I2VGen-XL with SVD.
- [2023.11] We release a high-quality I2VGen-XL model; please refer to the [Webpage](https://i2vgen-xl.github.io)
TODO
- [x] Release the technical papers and webpage of [I2VGen-XL](doc/i2vgen-xl.md)
- [x] Release the code and pretrained models that can generate 1280x720 videos
- [ ] Release models optimized specifically for the human body and faces
- [ ] Release an updated version that can fully preserve identity while capturing large, accurate motions simultaneously
- [ ] Release other methods and the corresponding models
BibTeX
If this repo is useful to you, please cite our corresponding technical paper.
@article{2023i2vgenxl,
  title={I2VGen-XL: High-Quality Image-to-Video Synthesis via Cascaded Diffusion Models},
  author={Zhang, Shiwei and Wang, Jiayu and Zhang, Yingya and Zhao, Kang and Yuan, Hangjie and Qing, Zhiwu and Wang, Xiang and Zhao, Deli and Zhou, Jingren},
  booktitle={arXiv preprint arXiv:2311.04145},
  year={2023}
}
@article{2023videocomposer,
  title={VideoComposer: Compositional Video Synthesis with Motion Controllability},
  author={Wang, Xiang and Yuan, Hangjie and Zhang, Shiwei and Chen, Dayou and Wang, Jiuniu and Zhang, Yingya and Shen, Yujun and Zhao, Deli and Zhou, Jingren},
  booktitle={arXiv preprint arXiv:2306.02018},
  year={2023}
}
@article{wang2023modelscope,
  title={ModelScope Text-to-Video Technical Report},
  author={Wang, Jiuniu and Yuan, Hangjie and Chen, Dayou and Zhang, Yingya and Wang, Xiang and Zhang, Shiwei},
  journal={arXiv preprint arXiv:2308.06571},
  year={2023}
}
License
This project is licensed under the MIT license.

VGen includes implementations of the following methods:

- I2VGen-XL
- ModelScope T2V
- HiGen
- DreamVideo
- VideoLCM
- VideoComposer