🚀 Diffusers - Text-to-Video Generation with AnimateDiff
The AnimateDiff method in the Diffusers library lets you create videos from an existing Stable Diffusion text-to-image model. It does this by inserting motion module layers into the frozen text-to-image model and training them on video clips to extract a motion prior, which enables video generation.
🚀 Quick Start
AnimateDiff lets you create videos from an existing Stable Diffusion text-to-image model by inserting motion module layers into the frozen model and training them on video clips to extract a motion prior. These motion modules are applied after the ResNet and attention blocks in the Stable Diffusion UNet and are designed to introduce coherent motion across image frames. To support these modules, Diffusers provides the MotionAdapter and UNetMotionModel abstractions, which make it convenient to use motion modules with existing Stable Diffusion models.
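As a rough sketch of how these pieces fit together (this snippet is illustrative rather than part of the official example; it reuses the checkpoint IDs from the example below together with the public MotionAdapter and UNetMotionModel classes), a frozen Stable Diffusion UNet can be turned into a motion-aware UNet roughly like this:

```python
import torch
from diffusers import MotionAdapter, UNet2DConditionModel, UNetMotionModel

# Load the 2D UNet of an existing (frozen) Stable Diffusion text-to-image checkpoint.
unet2d = UNet2DConditionModel.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", subfolder="unet", torch_dtype=torch.float16
)

# The MotionAdapter packages the motion module layers that were trained on video clips.
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-3", torch_dtype=torch.float16
)

# UNetMotionModel inserts the motion modules after the UNet's ResNet and attention
# blocks, reusing the image weights while adding motion modeling across frames.
motion_unet = UNetMotionModel.from_unet2d(unet2d, motion_adapter=adapter)
```

In practice you rarely build the motion UNet by hand; the AnimateDiff pipelines shown below perform this wrapping for you when given a MotionAdapter.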
SparseControlNetModel is the ControlNet implementation for AnimateDiff. ControlNet was introduced by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala in Adding Conditional Control to Text-to-Image Diffusion Models, and the SparseCtrl variant of ControlNet was introduced by Yuwei Guo, Ceyuan Yang, Anyi Rao, Maneesh Agrawala, Dahua Lin, and Bo Dai in SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models to enable controllable generation in text-to-video diffusion models.
✨ Key Features
- Reuse existing models: generate videos from an existing Stable Diffusion text-to-image model (see the sketch after this list).
- Motion modules: introduce coherent motion across image frames by inserting motion module layers.
- SparseControlNetModel: enables controllable video generation.
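For reference, here is a minimal sketch of plain AnimateDiff text-to-video generation without ControlNet, based on the public AnimateDiffPipeline API. The base model and motion adapter IDs mirror the full example below; the prompt, seed, and scheduler settings are illustrative assumptions rather than recommended values:

```python
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

model_id = "SG161222/Realistic_Vision_V5.1_noVAE"

# Motion modules trained on video clips, kept separate from the image checkpoint.
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-3", torch_dtype=torch.float16
)

# Reuse the existing text-to-image checkpoint and attach the motion adapter.
pipe = AnimateDiffPipeline.from_pretrained(
    model_id, motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    clip_sample=False,
    timestep_spacing="linspace",
    beta_schedule="linear",
    steps_offset=1,
)

output = pipe(
    prompt="a sunset over the ocean, masterpiece, high quality",
    negative_prompt="low quality, worst quality",
    num_frames=16,
    num_inference_steps=25,
    generator=torch.Generator().manual_seed(42),
)
export_to_gif(output.frames[0], "animation.gif")
```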
💻 Usage Example
Basic Usage
```python
import torch
from diffusers import AnimateDiffSparseControlNetPipeline
from diffusers.models import AutoencoderKL, MotionAdapter, SparseControlNetModel
from diffusers.schedulers import DPMSolverMultistepScheduler
from diffusers.utils import export_to_gif, load_image

# Checkpoints: a Stable Diffusion text-to-image base model, the AnimateDiff motion
# adapter, a SparseCtrl (scribble) ControlNet, a motion LoRA, and a fine-tuned VAE.
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
motion_adapter_id = "guoyww/animatediff-motion-adapter-v1-5-3"
controlnet_id = "guoyww/animatediff-sparsectrl-scribble"
lora_adapter_id = "guoyww/animatediff-motion-lora-v1-5-3"
vae_id = "stabilityai/sd-vae-ft-mse"
device = "cuda"

motion_adapter = MotionAdapter.from_pretrained(motion_adapter_id, torch_dtype=torch.float16).to(device)
controlnet = SparseControlNetModel.from_pretrained(controlnet_id, torch_dtype=torch.float16).to(device)
vae = AutoencoderKL.from_pretrained(vae_id, torch_dtype=torch.float16).to(device)
scheduler = DPMSolverMultistepScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    beta_schedule="linear",
    algorithm_type="dpmsolver++",
    use_karras_sigmas=True,
)

# Assemble the AnimateDiff + SparseControlNet pipeline around the frozen base model.
pipe = AnimateDiffSparseControlNetPipeline.from_pretrained(
    model_id,
    motion_adapter=motion_adapter,
    controlnet=controlnet,
    vae=vae,
    scheduler=scheduler,
    torch_dtype=torch.float16,
).to(device)

# Load the AnimateDiff motion LoRA and fuse it into the pipeline weights.
pipe.load_lora_weights(lora_adapter_id, adapter_name="motion_lora")
pipe.fuse_lora(lora_scale=1.0)

prompt = "an aerial view of a cyberpunk city, night time, neon lights, masterpiece, high quality"
negative_prompt = "low quality, worst quality, letterboxed"

# Sparse conditioning: three scribble images applied only at frames 0, 8, and 15.
image_files = [
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-scribble-1.png",
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-scribble-2.png",
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-scribble-3.png",
]
condition_frame_indices = [0, 8, 15]
conditioning_frames = [load_image(img_file) for img_file in image_files]

video = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=25,
    conditioning_frames=conditioning_frames,
    controlnet_conditioning_scale=1.0,
    controlnet_frame_indices=condition_frame_indices,
    generator=torch.Generator().manual_seed(1337),
).frames[0]

# Save the generated frames as a GIF.
export_to_gif(video, "output.gif")
```
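If moving every component to the GPU with `.to(device)` exceeds available memory, the pipeline's built-in offloading helper can be used instead (an optional variation on the snippet above, not part of the original example):

```python
# Alternative to `.to(device)`: keep weights on the CPU and move each submodule
# to the GPU only while it runs, trading some speed for lower peak memory.
pipe.enable_model_cpu_offload()
```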
Here is a sample result generated with the code above: