🚀 Qwenfluxprompt
Qwenfluxprompt is a LoRA for the Wan2.1 14b video generation model. It can be used with diffusers or ComfyUI, works with both the text-to-video and image-to-video Wan2.1 models, and is intended to help with video generation.
🚀 Quick start
About this LoRA
This is a LoRA for the Wan2.1 14b video generation model. It can be used with diffusers or ComfyUI, and it can be loaded into both the text-to-video and image-to-video Wan2.1 models. It was trained on Replicate with the AI Toolkit (https://replicate.com/ostris/wan-lora-trainer/train).
Trigger words
You should use COLTOK to trigger the video generation.
Using this LoRA
Replicate has a collection of Wan2.1 models that are optimized for speed and cost. They can also be used with this LoRA:
- https://replicate.com/collections/wan-video
- https://replicate.com/fofr/wan2.1-with-lora
💻 Usage examples
Basic usage
Run this LoRA with Replicate's API:
import replicate

# Prompt must include the trigger word; lora_url points at the weights in this repo
input = {
    "prompt": "COLTOK",
    "lora_url": "https://huggingface.co/mam33/qwenfluxprompt/resolve/main/wan2.1-14b-coltok-lora.safetensors"
}

output = replicate.run(
    "fofr/wan2.1-with-lora:f83b84064136a38415a3aff66c326f94c66859b8ad7a2cb432e2822774f07b08",
    model="14b",
    input=input
)

# Save each returned video to disk
for index, item in enumerate(output):
    with open(f"output_{index}.mp4", "wb") as file:
        file.write(item.read())
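The snippet above assumes you are already authenticated with Replicate; the Python client reads an API token from the REPLICATE_API_TOKEN environment variable. A minimal sketch (the token value is a placeholder):

```python
import os

# Set your Replicate API token before calling replicate.run
# (alternatively, export REPLICATE_API_TOKEN in your shell).
os.environ["REPLICATE_API_TOKEN"] = "r8_..."  # placeholder token
```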
Using with Diffusers
pip install git+https://github.com/huggingface/diffusers.git
Advanced usage
import torch
from diffusers.utils import export_to_video
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler
model_id = "Wan-AI/Wan2.1-T2V-14B-Diffusers"
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
flow_shift = 3.0  # 5.0 for 720P, 3.0 for 480P
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
pipe.to("cuda")

# Load this LoRA into the text-to-video pipeline
pipe.load_lora_weights("mam33/qwenfluxprompt")

pipe.enable_model_cpu_offload()  # optional, helps on low-VRAM machines
prompt = "COLTOK"
negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"
output = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
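Since the LoRA can also be loaded into the image-to-video Wan2.1 model, a minimal sketch of that path is below. It assumes the Wan-AI/Wan2.1-I2V-14B-480P-Diffusers checkpoint and diffusers' WanImageToVideoPipeline; the conditioning-image URL and resolution are illustrative placeholders:

```python
import torch
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
from transformers import CLIPVisionModel

# Assumed 480P image-to-video checkpoint; swap in the 720P variant if needed
model_id = "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers"
image_encoder = CLIPVisionModel.from_pretrained(model_id, subfolder="image_encoder", torch_dtype=torch.float32)
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanImageToVideoPipeline.from_pretrained(
    model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Load the same LoRA into the image-to-video pipeline
pipe.load_lora_weights("mam33/qwenfluxprompt")

# Placeholder first frame; resize it to the generation resolution
image = load_image("https://example.com/first_frame.jpg").resize((832, 480))

output = pipe(
    image=image,
    prompt="COLTOK",
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "output_i2v.mp4", fps=16)
```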
🔧 Technical details
- Steps: 2000
- Learning rate: 0.0001
- LoRA rank: 32
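To retrain with different settings, the same Replicate trainer linked above (https://replicate.com/ostris/wan-lora-trainer/train) can be launched from Python. The sketch below is a rough outline only: the destination model, the version hash, and the input field names (trigger_word, steps, learning_rate, lora_rank) are assumptions to be checked against the trainer's actual form.

```python
import replicate

# Hypothetical example: take the field names and version hash from
# https://replicate.com/ostris/wan-lora-trainer/train before running.
training = replicate.trainings.create(
    version="ostris/wan-lora-trainer:<version-hash>",  # placeholder version
    destination="your-username/your-wan-lora",         # a model you own on Replicate
    input={
        "trigger_word": "COLTOK",   # assumed field name
        "steps": 2000,              # matches this LoRA's training run
        "learning_rate": 0.0001,
        "lora_rank": 32,
        # plus your zipped training data, per the trainer's form
    },
)
print(training.status)
```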
🤝 Contribute your own examples
You can use the community tab to add videos that show what you have made with this LoRA.
📄 License
This project is licensed under Apache-2.0.