TDD
TDD (Target-Driven Distillation) is a consistency distillation method that uses target timestep selection and decoupled guidance to significantly reduce the number of inference steps required to generate high-quality images (only 4-8 steps).
Downloads 236
Release Time: 8/28/2024
Model Overview
TDD is an advanced distillation technique designed to accelerate the text-to-image generation process. Through innovative target timestep selection strategies and decoupled guidance methods, it significantly reduces inference steps while maintaining image quality.
Model Features
Target Timestep Selection Strategy
Employs a refined target timestep selection strategy to improve training efficiency: target timesteps are drawn from a predefined uniform denoising schedule, with a random offset added to accommodate non-deterministic sampling (see the sketch below).
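As a rough illustration of the idea (not the authors' exact implementation), the sketch below picks a target timestep from a uniform student schedule and perturbs it with a random offset; the schedule size, offset range, and function name are assumptions.

```python
import torch

def select_target_timestep(t, num_train_steps=1000, num_student_steps=8, max_offset=20):
    """Pick a distillation target timestep for the current timestep t.

    The target is drawn from a predefined uniform denoising schedule
    (the nearest schedule point below t), then perturbed by a small
    random offset so the student also matches targets produced by
    non-deterministic samplers. All values here are illustrative.
    """
    # Uniform schedule, e.g. [875, 750, ..., 0] for 8 student steps
    schedule = torch.linspace(num_train_steps, 0, num_student_steps + 1)[1:]
    below = schedule[schedule < t]
    target = float(below[0]) if len(below) > 0 else 0.0
    # Random offset toward higher noise, clamped so the target stays below t
    offset = torch.randint(0, max_offset + 1, (1,)).item()
    return int(min(target + offset, t))
```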
Decoupled Guidance Training
Uses decoupled guidance during training so that guidance scales can still be adjusted at inference time; a portion of the text conditions is replaced with empty prompts, aligning with the standard training process of CFG (see the sketch below).
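A minimal sketch of how decoupled-guidance training could be wired up is shown below, assuming an epsilon-prediction teacher UNet in the diffusers style; the guidance-scale range, drop probability, and function signature are illustrative, not the official training code.

```python
import torch

def teacher_target_with_decoupled_guidance(
    teacher_unet, noisy_latents, t, text_emb, empty_emb,
    guidance_scale_range=(2.0, 8.0), uncond_prob=0.1,
):
    """Build a CFG-guided teacher target (illustrative sketch).

    A guidance scale is sampled per call so the distilled student is not
    tied to one fixed scale, and a fraction of text embeddings is swapped
    for the empty-prompt embedding, mirroring standard CFG training.
    All names and default values here are assumptions.
    """
    # Replace some text conditions with the empty prompt (CFG-style dropout)
    empty_batch = empty_emb.expand_as(text_emb)
    drop = torch.rand(text_emb.shape[0], device=text_emb.device) < uncond_prob
    cond_emb = torch.where(drop[:, None, None], empty_batch, text_emb)

    # Sample the guidance scale instead of baking one into the student
    w = torch.empty(1).uniform_(*guidance_scale_range).item()

    with torch.no_grad():
        eps_cond = teacher_unet(noisy_latents, t, encoder_hidden_states=cond_emb).sample
        eps_uncond = teacher_unet(noisy_latents, t, encoder_hidden_states=empty_batch).sample

    # Classifier-free guidance combination used as the distillation target
    return eps_uncond + w * (eps_cond - eps_uncond)
```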
Flexible Sampling Options
Offers optional non-uniform sampling and x0 clipping for more flexible and precise image sampling (see the sketch below).
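The sketch below illustrates the two options in isolation, assuming an epsilon-parameterised model; the clip range and the schedule's shape parameter are assumed values, not released defaults.

```python
import torch

def predict_x0_clipped(eps_pred, noisy_latents, alpha_t, sigma_t, clip_range=1.0):
    """Recover the x0 estimate from an epsilon prediction and clip it.

    Clipping the intermediate x0 estimate can stabilise few-step sampling;
    the clip range here is an assumed value.
    """
    x0 = (noisy_latents - sigma_t * eps_pred) / alpha_t
    return x0.clamp(-clip_range, clip_range)

def nonuniform_timesteps(num_steps=8, num_train_steps=1000, gamma=2.0):
    """One possible non-uniform schedule: denser steps at low noise levels.

    gamma controls how strongly steps cluster toward the clean end; it is
    an assumed knob for illustration only.
    """
    u = torch.linspace(1.0, 0.0, num_steps + 1)[:-1]
    return (u ** gamma * num_train_steps).round().long()
```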
Fast Inference
Generates high-quality images in just 4-8 steps, significantly improving generation speed.
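A hedged usage sketch with diffusers is shown below, assuming TDD is distributed as an SDXL LoRA; the repository id and weight filename are illustrative, and the official release may recommend its own dedicated scheduler.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical repo id / filename for the distilled TDD LoRA weights
pipe.load_lora_weights("RED-AIGC/TDD", weight_name="tdd_lora.safetensors")
pipe.fuse_lora()

image = pipe(
    prompt="a watercolor painting of a lighthouse at dawn",
    num_inference_steps=8,   # 4-8 steps per the model card
    guidance_scale=2.0,      # decoupled guidance allows post-hoc tuning
).images[0]
image.save("tdd_sample.png")
```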
Model Capabilities
Text-to-Image Generation
Fast Image Generation
High-Quality Image Generation
Supports Multiple Base Models
Use Cases
Creative Design
Art Creation
Quickly generate art-style images
High-quality artworks generated in just 4-8 steps
Concept Design
Quickly generate product concept images
Efficiently generate diverse design concepts
Content Generation
Social Media Content
Quickly generate social media images
Efficiently generate engaging visual content
Advertising Materials
Quickly generate advertising creative images
Rapidly iterate multiple advertising design solutions