Anything V3 - Better VAE
Welcome to Anything V3 - Better VAE. This model currently comes in three formats: diffusers, ckpt, and safetensors. It ensures you'll never get a grey image result. Designed to generate high-quality, highly detailed anime-style images with just a few prompts, it also supports danbooru tags for image generation, similar to other anime-style Stable Diffusion models. For example, you can use tags like 1girl, white hair, golden eyes, beautiful eyes, detail, flower meadow, cumulonimbus clouds, lighting, detailed sky, garden.
Quick Start
Gradio
We support a Gradio Web UI to run Anything V3 with Better VAE; the hosted demo is linked from the model page.
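If you prefer to run a similar UI locally, a minimal sketch could look like the following. This is an assumption about how such a demo might be wired up, not the actual code of the Space, and it assumes a CUDA GPU plus the dependencies listed in the Installation section below.

import gradio as gr
import torch
from diffusers import StableDiffusionPipeline

# Minimal local demo sketch (not the official Space's code).
pipe = StableDiffusionPipeline.from_pretrained(
    "Linaqruf/anything-v3-0-better-vae", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

def generate(prompt, negative_prompt):
    # Return the first generated image for the given prompts.
    return pipe(prompt, negative_prompt=negative_prompt).images[0]

demo = gr.Interface(
    fn=generate,
    inputs=[gr.Textbox(label="Prompt"), gr.Textbox(label="Negative prompt")],
    outputs=gr.Image(label="Generated image"),
    title="Anything V3 - Better VAE",
)
demo.launch()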
🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information, please refer to the Stable Diffusion documentation. You can also export the model to ONNX, MPS and/or FLAX/JAX.
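As a rough illustration of the MPS path, the sketch below loads the pipeline on an Apple Silicon GPU. It assumes a PyTorch build with MPS support and is not taken from this repository's own documentation.

import torch
from diffusers import StableDiffusionPipeline

# Sketch: run the pipeline on Apple Silicon via the MPS backend
# (assumes a PyTorch build with MPS support; default float32 weights
# are used here for broad compatibility).
pipe = StableDiffusionPipeline.from_pretrained("Linaqruf/anything-v3-0-better-vae")
pipe = pipe.to("mps")

image = pipe("1girl, white hair, golden eyes, detailed sky, garden").images[0]
image.save("mps_sample.png")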
📦 Installation
You need to install the following dependencies to run the pipeline:
pip install diffusers transformers accelerate scipy safetensors
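Since the overview mentions that the weights also ship in the safetensors format, you can ask diffusers to prefer those files when downloading. The use_safetensors flag below assumes a reasonably recent diffusers release.

import torch
from diffusers import StableDiffusionPipeline

# Prefer the safetensors weights when downloading (assumes a diffusers
# version that supports the use_safetensors argument).
pipe = StableDiffusionPipeline.from_pretrained(
    "Linaqruf/anything-v3-0-better-vae",
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe = pipe.to("cuda")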
💻 Usage Examples
Basic Usage
Running the pipeline (if you don't swap the scheduler, it will run with the default DDIM. In this example, we are swapping it to DPMSolverMultistepScheduler):
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

model_id = "Linaqruf/anything-v3-0-better-vae"

# Use the DPMSolverMultistepScheduler (DPM-Solver++) scheduler here instead
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

prompt = "masterpiece, best quality, illustration, beautiful detailed, finely detailed, dramatic light, intricate details, 1girl, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden"
negative_prompt = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name"

with autocast("cuda"):
    image = pipe(prompt,
                 negative_prompt=negative_prompt,
                 width=512,
                 height=640,
                 guidance_scale=12,
                 num_inference_steps=50).images[0]

image.save("anime_girl.png")
Examples
Here are some examples of images generated using this model:
- Anime Girl:
- Anime Boy:
- Scenery:
License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
- You can't use the model to deliberately produce nor share illegal or harmful outputs or content.
- The authors claim no rights on the outputs you generate. You are free to use them and are accountable for their use, which must not go against the provisions set in the license.
- You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users. Please read the full license here.
Announcement
For the (unofficial) continuation of this model, please visit andite/anything-v4.0. I'm aware of this repo because I'm the one who (accidentally) gave the idea to publish the fine-tuned model ([andite/yohan-diffusion](https://huggingface.co/andite/yohan-diffusion)) as a base and merge it with many mysterious models, saying "hey, let's call it 'Anything V4.0'", as the quality is quite similar to Anything V3 but upgraded.
I also want to mention that I plan to remove or make private one of the repos named "Anything V3":
There are two versions now, and I've just realized that this mysterious, nonsensical model has polluted the Hugging Face Trending page for so long; it's still there even when new repos come out. I feel guilty every time this model is on the trending leaderboard.
I would prefer to delete or make private this one and gradually move to Linaqruf/anything-v3-better-vae, which has better repo management and includes a better VAE in the model.
Please share your thoughts in discussion #133 on whether I should delete this repo, the other one, or maybe both of them.
Thanks, Linaqruf.
Anything V3
Welcome to Anything V3, a latent diffusion model for anime enthusiasts. It can generate high-quality, highly detailed anime-style images with just a few prompts and supports danbooru tags for image generation, similar to other anime-style Stable Diffusion models. For example, you can use tags like 1girl, white hair, golden eyes, beautiful eyes, detail, flower meadow, cumulonimbus clouds, lighting, detailed sky, garden.
Gradio
We support a Gradio Web UI to run Anything-V3.0: Open in Spaces
🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information, please refer to the Stable Diffusion documentation. You can also export the model to ONNX, MPS and/or FLAX/JAX.
💻 Usage Examples
Basic Usage
from diffusers import StableDiffusionPipeline
import torch
model_id = "Linaqruf/anything-v3.0"
branch_name = "diffusers"
pipe = StableDiffusionPipeline.from_pretrained(model_id, revision=branch_name, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "pikachu"
image = pipe(prompt).images[0]
image.save("./pikachu.png")
Examples
Here are some examples of images generated using this model:
- Anime Girl:
1girl, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden
Steps: 50, Sampler: DDIM, CFG scale: 12
- Anime Boy:
1boy, medium hair, blonde hair, blue eyes, bishounen, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden
Steps: 50, Sampler: DDIM, CFG scale: 12
- Scenery:
scenery, shibuya tokyo, post-apocalypse, ruins, rust, sky, skyscraper, abandoned, blue sky, broken window, building, cloud, crane machine, outdoors, overgrown, pillar, sunset
Steps: 50, Sampler: DDIM, CFG scale: 12
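The settings listed above (DDIM sampler, 50 steps, CFG scale 12) map onto diffusers roughly as follows. This is a sketch of equivalent settings, not the exact configuration used to render the sample images.

import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

model_id = "Linaqruf/anything-v3.0"

# Load the model and switch to the DDIM sampler used for the samples above.
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

prompt = "scenery, shibuya tokyo, post-apocalypse, ruins, rust, sky, skyscraper, abandoned, blue sky, broken window, building, cloud, crane machine, outdoors, overgrown, pillar, sunset"

# Steps: 50, CFG scale: 12, as listed in the examples above.
image = pipe(prompt, num_inference_steps=50, guidance_scale=12).images[0]
image.save("scenery.png")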
License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
- You can't use the model to deliberately produce nor share illegal or harmful outputs or content.
- The authors claim no rights on the outputs you generate. You are free to use them and are accountable for their use, which must not go against the provisions set in the license.
- You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users. Please read the full license here.