🚀 Wan2.1
Wan2.1 is a comprehensive and open suite of video foundation models. It pushes the boundaries of video generation, offering high-performance solutions across multiple video-related tasks with support for consumer-grade GPUs.
🚀 Quick Start
📦 Installation
First, clone the repository:
```
git clone https://github.com/Wan-Video/Wan2.1.git
cd Wan2.1
```
Then, install the required dependencies:
```
# Ensure torch >= 2.4.0
pip install -r requirements.txt
```
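Before installing the rest of the dependencies, you can confirm the PyTorch requirement from Python; this is just a quick sanity check, not part of the repository:

```python
import torch

# The repository expects torch >= 2.4.0; fail early if the installed build is older.
major, minor = (int(x) for x in torch.__version__.split(".")[:2])
assert (major, minor) >= (2, 4), f"torch >= 2.4.0 required, found {torch.__version__}"
```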
📥 Model Download
You can download different models from Huggingface or ModelScope according to your needs. The following table shows the available models, their download links, and notes:
Model | Download Links | Notes |
---|---|---|
T2V-14B | 🤗 Huggingface 🤖 ModelScope | Supports both 480P and 720P |
I2V-14B-720P | 🤗 Huggingface 🤖 ModelScope | Supports 720P |
I2V-14B-480P | 🤗 Huggingface 🤖 ModelScope | Supports 480P |
T2V-1.3B | 🤗 Huggingface 🤖 ModelScope | Supports 480P |
FLF2V-14B | 🤗 Huggingface 🤖 ModelScope | Supports 720P |
VACE-1.3B | 🤗 Huggingface 🤖 ModelScope | Supports 480P |
VACE-14B | 🤗 Huggingface 🤖 ModelScope | Supports both 480P and 720P |
⚠️ Important Note
- The 1.3B model can generate 720P videos, but due to limited training at this resolution, the results are generally less stable than at 480P. For optimal performance, we recommend using 480P resolution.
- For first-last-frame-to-video generation, our model is mainly trained on Chinese text-video pairs. Therefore, we recommend using Chinese prompts for better results.
You can download models using `huggingface-cli`:
```
pip install "huggingface_hub[cli]"
huggingface-cli download Wan-AI/Wan2.1-T2V-14B --local-dir ./Wan2.1-T2V-14B
```
Or using `modelscope-cli`:
```
pip install modelscope
modelscope download Wan-AI/Wan2.1-T2V-14B --local_dir ./Wan2.1-T2V-14B
```
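If you prefer downloading from Python rather than the CLI, `huggingface_hub` provides `snapshot_download`; the sketch below simply mirrors the repository ID and target directory used in the commands above:

```python
from huggingface_hub import snapshot_download

# Download the full T2V-14B checkpoint repository into a local directory,
# matching the huggingface-cli example above.
snapshot_download(
    repo_id="Wan-AI/Wan2.1-T2V-14B",
    local_dir="./Wan2.1-T2V-14B",
)
```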
💻 Usage Examples
🏃‍♂️ Run Text-to-Video Generation
This repository supports two Text-to-Video models (1.3B and 14B) and two resolutions (480P and 720P). The following table shows the supported tasks, resolutions, and models:
Task | 480P | 720P | Model |
---|---|---|---|
t2v-14B | ✔️ | ✔️ | Wan2.1-T2V-14B |
t2v-1.3B | ✔️ | ❌ | Wan2.1-T2V-1.3B |
🔍 (1) Without Prompt Extension
- Single-GPU inference
```
python generate.py --task t2v-14B --size 1280*720 --ckpt_dir ./Wan2.1-T2V-14B --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage."
```
If you encounter OOM (Out-of-Memory) issues, you can use the `--offload_model True` and `--t5_cpu` options to reduce GPU memory usage. For example, on an RTX 4090 GPU:
```
python generate.py --task t2v-1.3B --size 832*480 --ckpt_dir ./Wan2.1-T2V-1.3B --offload_model True --t5_cpu --sample_shift 8 --sample_guide_scale 6 --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage."
```
💡 Usage Tip
If you are using the `T2V-1.3B` model, we recommend setting the parameter `--sample_guide_scale 6`. The `--sample_shift` parameter can be adjusted within the range of 8 to 12 based on performance.
- Multi-GPU inference using FSDP + xDiT USP
We use FSDP and xDiT USP to accelerate inference.
- Ulysses Strategy

  If you want to use the `Ulysses` strategy, you should set `--ulysses_size $GPU_NUMS`. Note that `num_heads` should be divisible by `ulysses_size` if you wish to use the `Ulysses` strategy. For the 1.3B model, `num_heads` is `12`, which can't be divided by 8 (as most multi-GPU machines have 8 GPUs). Therefore, it is recommended to use the `Ring` strategy instead.

- Ring Strategy

  If you want to use the `Ring` strategy, you should set `--ring_size $GPU_NUMS`. Note that the sequence length should be divisible by `ring_size` when using the `Ring` strategy.

Of course, you can also combine the `Ulysses` and `Ring` strategies (a quick divisibility sanity check is sketched after the command below).
```
pip install "xfuser>=0.4.1"
torchrun --nproc_per_node=8 generate.py --task t2v-14B --size 1280*720 --ckpt_dir ./Wan2.1-T2V-14B --dit_fsdp --t5_fsdp --ulysses_size 8 --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage."
```
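To make the divisibility constraints above concrete, here is a small sanity check you can run before launching `torchrun`. The helper and the example sequence length are illustrative assumptions, not part of the repository; only `num_heads = 12` for the 1.3B model comes from the notes above:

```python
def check_parallel_layout(num_gpus: int, num_heads: int, seq_len: int,
                          ulysses_size: int = 1, ring_size: int = 1) -> None:
    """Hypothetical helper: validate a Ulysses/Ring split before launching torchrun."""
    assert ulysses_size * ring_size == num_gpus, \
        "ulysses_size * ring_size must equal the number of GPUs"
    assert num_heads % ulysses_size == 0, \
        "num_heads must be divisible by ulysses_size"
    assert seq_len % ring_size == 0, \
        "the sequence length must be divisible by ring_size"

SEQ_LEN = 32760  # illustrative sequence length, not a value taken from the model

# 1.3B model (num_heads = 12) on 8 GPUs: 12 % 8 != 0, so --ulysses_size 8 would fail.
# check_parallel_layout(8, num_heads=12, seq_len=SEQ_LEN, ulysses_size=8)  # AssertionError

# A Ring-only split works as long as the sequence length is a multiple of 8.
check_parallel_layout(8, num_heads=12, seq_len=SEQ_LEN, ring_size=8)

# Combining both: --ulysses_size 2 --ring_size 4 is also valid for the 1.3B model (12 % 2 == 0).
check_parallel_layout(8, num_heads=12, seq_len=SEQ_LEN, ulysses_size=2, ring_size=4)
```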
🔍 (2) Using Prompt Extension
Extending prompts can effectively enrich the details in generated videos and improve video quality. We provide the following two methods for prompt extension:
- Use the Dashscope API for extension.
  - Apply for a `dashscope.api_key` in advance (EN | CN).
  - Configure the environment variable `DASH_API_KEY` to specify the Dashscope API key. For users of Alibaba Cloud's international site, you also need to set the environment variable `DASH_API_URL` to 'https://dashscope-intl.aliyuncs.com/api/v1'. For more detailed instructions, please refer to the dashscope document. (A minimal environment setup is sketched after this list.)
  - Use the `qwen-plus` model for text-to-video tasks and `qwen-vl-max` for image-to-video tasks.
  - You can modify the model used for extension...
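As a minimal sketch of the environment setup described above (the key value is a placeholder; you would normally export these variables in your shell before running `generate.py`):

```python
import os

# Placeholder key: substitute your own Dashscope API key.
os.environ["DASH_API_KEY"] = "sk-your-dashscope-key"

# Only needed for users of Alibaba Cloud's international site.
os.environ["DASH_API_URL"] = "https://dashscope-intl.aliyuncs.com/api/v1"
```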
✨ Features
- 👍 SOTA Performance: Wan2.1 consistently outperforms existing open-source models and state-of-the-art commercial solutions across multiple benchmarks.
- 👍 Supports Consumer-grade GPUs: The T2V-1.3B model requires only 8.19 GB VRAM, making it compatible with almost all consumer-grade GPUs. It can generate a 5-second 480P video on an RTX 4090 in about 4 minutes (without optimization techniques like quantization). Its performance is even comparable to some closed-source models.
- 👍 Multiple Tasks: Wan2.1 excels in Text-to-Video, Image-to-Video, Video Editing, Text-to-Image, and Video-to-Audio, advancing the field of video generation.
- 👍 Visual Text Generation: Wan2.1 is the first video model capable of generating both Chinese and English text, featuring robust text generation that enhances its practical applications.
- 👍 Powerful Video VAE: Wan-VAE delivers exceptional efficiency and performance, encoding and decoding 1080P videos of any length while preserving temporal information, making it an ideal foundation for video and image generation.
🎥 Video Demos
🔥 Latest News!!
- May 14, 2025: 👋 We introduce Wan2.1 VACE, an all-in-one model for video creation and editing, along with its inference code, weights, and technical report!
- Apr 17, 2025: 👋 We introduce Wan2.1 FLF2V with its inference code and weights!
- Mar 21, 2025: 👋 We are excited to announce the release of the Wan2.1 technical report. We welcome discussions and feedback!
- Mar 3, 2025: 👋 Wan2.1's T2V and I2V have been integrated into Diffusers (T2V | I2V). Feel free to give it a try! (A minimal T2V usage sketch follows this list.)
- Feb 27, 2025: 👋 Wan2.1 has been integrated into ComfyUI. Enjoy!
- Feb 25, 2025: 👋 We've released the inference code and weights of Wan2.1.
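As a rough illustration of the Diffusers integration mentioned above, a minimal text-to-video sketch might look like the following. The repository ID, resolution, frame count, and guidance scale are assumptions based on the Diffusers-format checkpoints, so please check the Diffusers documentation for the exact API:

```python
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

# Assumed Diffusers-format checkpoint; verify the exact repo id on the Hub.
model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"

vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.to("cuda")

frames = pipe(
    prompt="Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage.",
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=6.0,  # mirrors the --sample_guide_scale 6 tip for the 1.3B model
).frames[0]

export_to_video(frames, "output.mp4", fps=16)
```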
👥 Community Works
If your work has improved Wan2.1 and you would like more people to see it, please inform us.
- [Phantom](https://github.com/Phantom-video/Phantom) has developed a unified video generation framework for single and multi-subject references based on Wan2.1-T2V-1.3B. Please refer to [their examples](https://github.com/Phantom-video/Phantom).
- [UniAnimate-DiT](https://github.com/ali-vilab/UniAnimate-DiT), based on Wan2.1-14B-I2V, has trained a human image animation model and open-sourced the inference and training code. Feel free to enjoy it!
- [CFG-Zero](https://github.com/WeichenFan/CFG-Zero-star) enhances Wan2.1 (covering both T2V and I2V models) from the perspective of CFG.
- [TeaCache](https://github.com/ali-vilab/TeaCache) now supports Wan2.1 acceleration, capable of increasing speed by approximately 2x. Feel free to give it a try!
- [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio) provides more support for Wan2.1, including video-to-video, FP8 quantization, VRAM optimization, LoRA training, and more. Please refer to [their examples](https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/wanvideo).
📋 Todo List
- Wan2.1 Text-to-Video
  - [x] Multi-GPU Inference code of the 14B and 1.3B models
  - [x] Checkpoints of the 14B and 1.3B models
  - [x] Gradio demo
  - [x] ComfyUI integration
  - [x] Diffusers integration
  - [ ] Diffusers + Multi-GPU Inference
- Wan2.1 Image-to-Video
  - [x] Multi-GPU Inference code of the 14B model
  - [x] Checkpoints of the 14B model
  - [x] Gradio demo
  - [x] ComfyUI integration
  - [x] Diffusers integration
  - [ ] Diffusers + Multi-GPU Inference
- Wan2.1 First-Last-Frame-to-Video
  - [x] Multi-GPU Inference code of the 14B model
  - [x] Checkpoints of the 14B model
  - [x] Gradio demo
  - [ ] ComfyUI integration
  - [ ] Diffusers integration
  - [ ] Diffusers + Multi-GPU Inference
- Wan2.1 VACE
  - [x] Multi-GPU Inference code of the 14B and 1.3B models
  - [x] Checkpoints of the 14B and 1.3B models
  - [x] Gradio demo
  - [x] ComfyUI integration
  - [ ] Diffusers integration
  - [ ] Diffusers + Multi-GPU Inference
📄 License
The project is licensed under the Apache-2.0 license.
💜 Wan    |    🖥️ GitHub    |   🤗 Hugging Face   |   🤖 ModelScope   |    📑 Technical Report    |    📑 Blog    |   💬 WeChat Group   |    📖 Discord  
Wan: Open and Advanced Large-Scale Video Generative Models

