🚀 Stable Diffusion v1-5 Model Card
Stable Diffusion is a latent text-to-image diffusion model that can generate photo-realistic images based on any text input. For more details on how Stable Diffusion works, refer to 🤗's Stable Diffusion blog.
🚀 Quick Start
You can use this model with both the 🧨Diffusers library and the RunwayML GitHub repository.
Diffusers
```python
from diffusers import StableDiffusionPipeline
import torch

model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
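If GPU memory is tight, Diffusers also provides attention slicing, which lowers peak VRAM usage at a small speed cost. A minimal sketch, assuming the `pipe` object from the snippet above:

```python
# Optional: trade a little speed for a lower peak memory footprint.
pipe.enable_attention_slicing()
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```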
For more detailed instructions, use-cases, and examples in JAX, follow the instructions here.
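As a hedged sketch of what JAX/Flax inference can look like with Diffusers (the `bf16` revision is an assumption about the published weights; defer to the linked instructions for the authoritative version):

```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline

# Assumption: Flax weights are published under a "bf16" revision of the repo.
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jax.numpy.bfloat16
)

prompt = "a photo of an astronaut riding a horse on mars"
num_samples = jax.device_count()  # one image per available device
prompt_ids = pipeline.prepare_inputs([prompt] * num_samples)

# Replicate the weights and shard inputs/RNG across devices for pmap-ed inference.
params = replicate(params)
prng_seed = jax.random.split(jax.random.PRNGKey(0), num_samples)
prompt_ids = shard(prompt_ids)

images = pipeline(prompt_ids, params, prng_seed, num_inference_steps=50, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```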
Original GitHub Repository
- Download the weights.
- Follow the instructions here.
✨ Features
- Text-to-Image Generation: Capable of generating photo-realistic images from any text input.
- Fine-Tuning: The Stable-Diffusion-v1-5 checkpoint was initialized with the weights of Stable-Diffusion-v1-2 and subsequently fine-tuned for 595,000 steps at resolution 512x512 on "laion-aesthetics v2 5+" (see Training below).
📚 Documentation
Model Details
| Property | Details |
|----------|---------|
| Developed by | Robin Rombach, Patrick Esser |
| Model Type | Diffusion-based text-to-image generation model |
| Language(s) | English |
| License | The CreativeML OpenRAIL M license, an Open RAIL M license, adapted from the work of BigScience and the RAIL Initiative. See also the article about the BLOOM Open RAIL license. |
| Model Description | A model for generating and modifying images based on text prompts. It is a Latent Diffusion Model that uses a fixed, pretrained text encoder (CLIP ViT-L/14), as suggested in the Imagen paper. |
| Resources for more information | GitHub Repository, Paper. |
| Cite as | `@InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695}}` |
Uses
Direct Use
The model is for research purposes only. Possible research areas and tasks include:
- Safe deployment of models with the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
Misuse, Malicious Use, and Out-of-Scope Use
Note: This section is taken from the DALLE-MINI model card, but applies equally to Stable Diffusion v1.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating disturbing, distressing, or offensive images, or content that propagates historical or current stereotypes.
Out-of-Scope Use
The model was not trained to provide factual or true representations of people or events; generating such content is therefore out of scope for this model.
Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without the consent of those who might see it.
- Mis- and disinformation.
- Representations of egregious violence and gore.
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
Limitations and Bias
Limitations
- The model doesn't achieve perfect photorealism.
- It can't render legible text.
- It performs poorly on more complex tasks involving compositionality, like rendering an image of “A red cube on top of a blue sphere”.
- Faces and people may not be generated accurately.
- The model was mainly trained with English captions and works less effectively in other languages.
- The autoencoding part of the model is lossy.
- The model was trained on the large-scale dataset LAION-5B, which contains adult material and isn't suitable for product use without additional safety measures.
- No additional measures were used to deduplicate the dataset, resulting in some memorization of duplicated images in the training data. The training data can be searched at https://rom1504.github.io/clip-retrieval/ to detect memorized images.
Bias
Although image generation models are impressive, they can reinforce or exacerbate social biases. Stable Diffusion v1 was trained on subsets of LAION-2B(en), which mainly consists of images with English descriptions. Texts and images from non-English communities and cultures are likely underrepresented, affecting the model's output. Western cultures are often the default, and the model performs significantly worse with non-English prompts.
Safety Module
The model is intended to be used with the Safety Checker in Diffusers. This checker compares model outputs against known, hard-coded NSFW concepts; the concepts are intentionally hidden to make reverse-engineering the filter harder. Specifically, after image generation the checker compares the class probability of harmful concepts in the embedding space of the CLIPTextModel, using hand-engineered weights for each NSFW concept.
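In Diffusers the checker runs by default, and the pipeline output exposes one flag per generated image. A minimal sketch, assuming the fp16 `pipe` from the Quick Start:

```python
# The safety checker is enabled by default in StableDiffusionPipeline.
result = pipe("a photo of an astronaut riding a horse on mars")
image = result.images[0]
print(result.nsfw_content_detected)  # e.g. [False]; flagged images are blacked out
```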
Training
Training Data
The model was trained on the following dataset:
- LAION-2B (en) and subsets thereof.
Training Procedure
Stable Diffusion v1-5 is a latent diffusion model that combines an autoencoder with a diffusion model trained in the autoencoder's latent space. During training:
- Images are encoded by an encoder into latent representations. The autoencoder uses a relative downsampling factor of f = 8, mapping images of shape H x W x 3 to latents of shape H/f x W/f x 4 (see the sketch after this list).
- Text prompts are encoded by a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise added to the latent and the UNet's prediction.
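As a hedged illustration of the autoencoding step, the following sketch loads only the VAE from the published checkpoint and checks the 8x spatial downsampling; the random tensor stands in for a preprocessed RGB image scaled to [-1, 1]:

```python
import torch
from diffusers import AutoencoderKL

# Load just the autoencoder component of the v1-5 checkpoint.
vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")

image = torch.randn(1, 3, 512, 512)  # stand-in for a preprocessed image (batch, 3, H, W)

with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample()

print(latents.shape)  # torch.Size([1, 4, 64, 64]): H/8 x W/8 with 4 latent channels
```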
Currently, six Stable Diffusion checkpoints are available, trained as follows:
- `stable-diffusion-v1-1`: 237,000 steps at resolution 256x256 on laion2B-en, then 194,000 steps at resolution 512x512 on laion-high-resolution (170M examples from LAION-5B with resolution >= 1024x1024).
- `stable-diffusion-v1-2`: Resumed from `stable-diffusion-v1-1`; 515,000 steps at resolution 512x512 on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size >= 512x512, an estimated aesthetics score > 5.0, and an estimated watermark probability < 0.5).
- `stable-diffusion-v1-3`: Resumed from `stable-diffusion-v1-2`; 195,000 steps at resolution 512x512 on "laion-improved-aesthetics" with 10% dropping of the text-conditioning to improve classifier-free guidance sampling.
- `stable-diffusion-v1-4`: Resumed from `stable-diffusion-v1-2`; 225,000 steps at resolution 512x512 on "laion-aesthetics v2 5+" with 10% dropping of the text-conditioning to improve classifier-free guidance sampling.
- `stable-diffusion-v1-5`: Resumed from `stable-diffusion-v1-2`; 595,000 steps at resolution 512x512 on "laion-aesthetics v2 5+" with 10% dropping of the text-conditioning to improve classifier-free guidance sampling.
- `stable-diffusion-inpainting`: Resumed from `stable-diffusion-v1-5`; then 440,000 steps of inpainting training at resolution 512x512 on "laion-aesthetics v2 5+" with 10% dropping of the text-conditioning.
- Hardware: 32 x 8 x A100 GPUs
- Optimizer: AdamW
- Gradient Accumulations: 2
- Batch: 32 x 8 x 2 x 4 = 2048
- Learning rate: Warmed up to 0.0001 for 10,000 steps and then kept constant.
Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling steps show the relative improvements of the checkpoints:
![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/m
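A hedged sketch of running one point of such a sweep with Diffusers (PNDM/PLMS is already the pipeline's default scheduler; it is set explicitly here only for clarity, and the prompt is an arbitrary example):

```python
import torch
from diffusers import StableDiffusionPipeline, PNDMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = PNDMScheduler.from_config(pipe.scheduler.config)

for guidance_scale in (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0):
    image = pipe(
        "a photo of an astronaut riding a horse on mars",
        num_inference_steps=50,  # matches the 50 PNDM/PLMS steps above
        guidance_scale=guidance_scale,
    ).images[0]
    image.save(f"astronaut_cfg_{guidance_scale}.png")
```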
📄 License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
- You can't use the model to deliberately produce or share illegal or harmful outputs or content.
- CompVis claims no rights on the outputs you generate. You are free to use them and are accountable for their use, which must not violate the license provisions.
- You may re-distribute the weights and use the model commercially and/or as a service. If you do, include the same use restrictions as in the license and share a copy of the CreativeML OpenRAIL-M with all your users.
Please read the full license carefully here.