Flux-based Image Generation Model
This is an open-source, commercially usable base model for high-quality image generation. It is based on the FLUX.1 series and performs very well in image quality, detail, prompt following, and style diversity.
Quick Start
Model Features
- Based on FLUX.1-schnell: Merged with LibreFLUX and finetuned using tools like ComfyUI, Block_Patcher_ComfyUI, and ComfyUI_essentials.
- Fast Image Generation: 4-8 steps are recommended; 4 steps are usually sufficient. It generates high-quality images quickly compared to other Flux.1 Schnell models.
- Style Preservation: Follows the original Flux Schnell or Flux.1 Dev style, with strong prompt following ability.
Installation and Usage
- Required Models:
  - UNET versions (model only) need text encoders and a VAE. The following CLIP and text encoder models are recommended for better prompt guidance (a scripted loading sketch follows this list):
    - Long CLIP: https://huggingface.co/zer0int/CLIP-GmP-ViT-L-14/blob/main/ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors
    - Text Encoders: https://huggingface.co/silveroxides/CLIP-Collection/blob/main/t5xxl_flan_latest-fp8_e4m3fn.safetensors
    - VAE: https://huggingface.co/black-forest-labs/FLUX.1-schnell/tree/main/vae
- GGUF Version: Install the GGUF model support nodes from https://github.com/city96/ComfyUI-GGUF
- Sample Workflow: A very simple workflow is shown in the image below. No other custom ComfyUI nodes are needed (for the GGUF version, use city96's Unet Loader (GGUF) node).
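If you prefer a scripted route instead of the ComfyUI workflow, the sketch below shows one possible way to load the UNET-only safetensors file with Hugging Face diffusers, borrowing the text encoders and VAE from the FLUX.1-schnell base repository. This is a minimal sketch, not the recommended setup from this card; the local file path and prompt are hypothetical.

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# Hypothetical local path to this model's UNET-only .safetensors file
transformer = FluxTransformer2DModel.from_single_file(
    "./models/unet/model-fp8.safetensors",
    torch_dtype=torch.bfloat16,
)

# The base repository supplies the CLIP-L / T5 text encoders and the VAE
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # keeps VRAM usage manageable

image = pipe(
    "a cat astronaut floating in a nebula, detailed, cinematic",
    num_inference_steps=4,   # 4-8 steps recommended; 4 is usually enough
    guidance_scale=0.0,      # Schnell-style models run without CFG-style guidance
    height=1024,
    width=1024,
).images[0]
image.save("sample.png")
```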

Features
Schnell-Based Model
- The distillation has been washed away, returning the model to its original base behavior. It may be the best-balanced open-source, commercially usable Schnell base model among the various models based on Flux.1 Schnell: it generates images quickly (4-8 steps), follows the original Flux Schnell composition style, has strong prompt-following ability, and strikes the best balance of image quality, detail, realism, and style diversity.

Flux.1 Dev-Based Model
- Possibly the best-quality 6-10 step model. In some details it surpasses Flux.1 Dev and approaches Flux.1 Pro. It follows the original Flux.1 Dev style, has strong prompt-following ability, and offers the best image quality among Flux finetunes for fast generation (within 10 steps); see the parameter sketch below.
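Assuming the Dev-based variant has been loaded into a `pipe` object as in the earlier sketch (with the text encoders and VAE taken from the FLUX.1-dev base repository instead), the main difference is the sampling settings. The values below simply mirror the 6-10 step recommendation; the guidance value is an untuned assumption based on typical Flux.1 Dev usage.

```python
# Dev-style settings: more steps and a non-zero distilled-guidance value
image = pipe(
    "a rainy neon street at night, reflective pavement, 35mm photo",
    num_inference_steps=8,   # 6-10 steps recommended for the Dev-based model
    guidance_scale=3.5,      # typical embedded-guidance value for Flux.1 Dev
    height=1024,
    width=1024,
).images[0]
image.save("dev_sample.png")
```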
Quantized Model
- GGUF Q8_0 / Q5_1 / Q4_1 quantized model files have been tested and uploaded. No other quantizations will be provided, as over-quantization loses the advantages of this high-speed, high-precision model. You can download the FP8 model file and quantize it yourself using the tips in Advanced Usage below.
Installation
If you want to use the GGUF version, you need to install the GGUF model support nodes from https://github.com/city96/ComfyUI-GGUF.
Usage Examples
Basic Usage
For basic usage of the model, refer to the sample workflow and the recommended models above.
Advanced Usage
If you want to convert the model to GGUF Q5/Q4, you can use the script at https://github.com/ruSauron/to-gguf-bat. Download it and put it in the same directory as the sd.exe file, then drag the fp8 .safetensors model file onto the .bat file in Explorer; a CMD window will open, and you can follow the menu to convert the model to the format you want.
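After converting, one way to sanity-check the output is to list the tensor quantization types with the `gguf` Python package (`pip install gguf`). This is only an optional verification sketch; the output file name below is hypothetical.

```python
from collections import Counter
from gguf import GGUFReader

# Hypothetical path to the converted file
reader = GGUFReader("model-Q4_1.gguf")

# Count how many tensors ended up in each quantization type
type_counts = Counter(t.tensor_type.name for t in reader.tensors)
for tensor_type, count in sorted(type_counts.items()):
    print(f"{tensor_type}: {count} tensors")
```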
Documentation
Model References
Thanks
- https://huggingface.co/black-forest-labs/FLUX.1-dev, a very good open-source T2I model released under the FLUX.1 [dev] Non-Commercial License.
- https://huggingface.co/black-forest-labs/FLUX.1-schnell, a very good open-source T2I model released under the Apache 2.0 license.
- https://huggingface.co/Anibaaal, whose Flux-Fusion is a very good mixed and tuned model.
- https://huggingface.co/nyanko7, whose Flux-dev-de-distill is a great experimental project. Thanks for the [inference.py](https://huggingface.co/nyanko7/flux-dev-de-distill/blob/main/inference.py) script.
- https://huggingface.co/jimmycarter/LibreFLUX, a free, de-distilled FLUX model and an Apache 2.0 version of FLUX.1-schnell.
- https://huggingface.co/MonsterMMORPG, where Furkan shares many Flux.1 model testing and tuning tutorials, including some dedicated tests of the de-distilled model.
- https://github.com/cubiq/Block_Patcher_ComfyUI, cubiq's Flux blocks patcher sampler, which let me run many tests on how Flux.1 block parameter values change image generation; the FluxBlocksBuster node in his ComfyUI_essentials makes adjusting block values easy. Great work!
- https://huggingface.co/twodgirl, for sharing the model quantization script and the test dataset.
- https://huggingface.co/John6666, for sharing the model conversion script and model collections.
- https://github.com/city96/ComfyUI-GGUF, native ComfyUI support for GGUF quantized models.
- https://github.com/leejet/stable-diffusion.cpp, which provides pure C/C++ GGUF model conversion scripts.
License
The weights fall under the [FLUX.1 [dev]](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md) Non-Commercial License.