🚀 Qwenfluxprompt
Qwenfluxprompt is a LoRA for the Wan2.1 14b video generation model. It can be used with diffusers or ComfyUI, and it works with both the text-to-video and image-to-video Wan2.1 models.
🚀 Quick start
About this LoRA
This is a LoRA for the Wan2.1 14b video generation model. It can be used with diffusers or ComfyUI, and can be loaded into both the text-to-video and image-to-video Wan2.1 models. It was trained on Replicate using the AI toolkit: https://replicate.com/ostris/wan-lora-trainer/train
Trigger word
You should use COLTOK to trigger the video generation.
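For example, a prompt could look like the line below. The scene description is purely illustrative; only the COLTOK token itself comes from the training.

prompt = "COLTOK, a slow cinematic pan across a rainy city street at night"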
Use this LoRA
Replicate has a collection of Wan2.1 models that are optimized for speed and cost. They can also be used with this LoRA:
- https://replicate.com/collections/wan-video
- https://replicate.com/fofr/wan2.1-with-lora
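To use the LoRA in ComfyUI, or in any other tool that needs the raw weights, you can download the .safetensors file from this repository. A minimal sketch using huggingface_hub (what you do with the local path afterwards is up to you):

from huggingface_hub import hf_hub_download

# Download the LoRA weights file referenced elsewhere in this README.
lora_path = hf_hub_download(
    repo_id="mam33/qwenfluxprompt",
    filename="wan2.1-14b-coltok-lora.safetensors",
)
print(lora_path)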
💻 Usage examples
Basic usage
Run this LoRA with the Replicate API:
import replicate

# The trigger word COLTOK must appear in the prompt; lora_url points at the
# .safetensors file hosted in this repository.
input = {
    "prompt": "COLTOK",
    "lora_url": "https://huggingface.co/mam33/qwenfluxprompt/resolve/main/wan2.1-14b-coltok-lora.safetensors"
}

# model="14b" selects the 14b Wan2.1 variant of fofr/wan2.1-with-lora.
output = replicate.run(
    "fofr/wan2.1-with-lora:f83b84064136a38415a3aff66c326f94c66859b8ad7a2cb432e2822774f07b08",
    model="14b",
    input=input
)

# Save each returned video to disk.
for index, item in enumerate(output):
    with open(f"output_{index}.mp4", "wb") as file:
        file.write(item.read())
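The example above assumes the Replicate Python client is installed (pip install replicate) and authenticated. The client reads your API token from the REPLICATE_API_TOKEN environment variable; a minimal setup sketch (the token value is a placeholder):

import os

# Prefer exporting REPLICATE_API_TOKEN in your shell; setting it in code is only for illustration.
os.environ["REPLICATE_API_TOKEN"] = "<your-replicate-api-token>"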
Using with Diffusers
pip install git+https://github.com/huggingface/diffusers.git
Advanced usage
import torch
from diffusers.utils import export_to_video
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler

# Load the Wan2.1 14b text-to-video pipeline; the VAE is kept in float32 for numerical stability.
model_id = "Wan-AI/Wan2.1-T2V-14B-Diffusers"
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)

# A flow shift of 3.0 is recommended for 480p output (use around 5.0 for 720p).
flow_shift = 3.0
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
pipe.to("cuda")

# Load this LoRA from the Hugging Face Hub.
pipe.load_lora_weights("mam33/qwenfluxprompt")

# Optional: offload model components to the CPU to reduce VRAM usage.
pipe.enable_model_cpu_offload()

# Include the trigger word COLTOK in your prompt.
prompt = "COLTOK"
negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"

output = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
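This LoRA can also be loaded into the image-to-video Wan2.1 model. The sketch below assumes the standard Wan2.1 I2V 480p checkpoint and a placeholder input image; the model id, resolution, and sampling settings are carried over from the text-to-video example rather than being specific to this LoRA.

import torch
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Assumed 480p image-to-video checkpoint; swap in the 720p variant if preferred.
model_id = "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers"
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanImageToVideoPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Load this LoRA into the image-to-video pipeline as well.
pipe.load_lora_weights("mam33/qwenfluxprompt")
pipe.enable_model_cpu_offload()

# Placeholder starting frame; replace with your own image.
image = load_image("https://example.com/first_frame.jpg").resize((832, 480))

output = pipe(
    image=image,
    prompt="COLTOK",
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "output_i2v.mp4", fps=16)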
🔧 Technical details
- Steps: 2000
- Learning rate: 0.0001
- LoRA rank: 32 (see below for scaling the LoRA at inference time)
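The LoRA is applied at full strength by default. If the COLTOK effect is too strong or too weak, you can scale it at inference time; a sketch assuming diffusers' PEFT-backed adapter API (the adapter name "coltok" is arbitrary):

# Give the adapter a name when loading so it can be referenced later.
pipe.load_lora_weights("mam33/qwenfluxprompt", adapter_name="coltok")

# Run the LoRA at 80% strength (1.0 is the trained strength).
pipe.set_adapters(["coltok"], adapter_weights=[0.8])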
🤝 Contribute your own examples
You can use the community tab to add videos that show off what you have made with this LoRA.
📄 License
This project is licensed under the Apache-2.0 license.