# TIPO: Text to Image with text presampling for Prompt Optimization
An innovative framework that enhances text-to-image generative models by refining user prompts with LLMs.
## Quick Start
Use the updated version of the DTG extension (renamed to z-tipo-extension). The current version of z-tipo-extension supports stable-diffusion-webui, stable-diffusion-webui-forge, and ComfyUI. SD-Next has not been tested.

z-tipo-extension on GitHub
## Features
In this project, we introduce **TIPO** (Text to Image with text presampling for Prompt Optimization), a framework designed to significantly improve the quality and usability of Text-to-Image (T2I) generative models. TIPO uses Large Language Models (LLMs) to perform "text presampling" within the inference pipeline of a text-to-image generative model. By refining and extending user input prompts, TIPO enables generative models to produce superior results with minimal user effort, making T2I systems more accessible and effective for a wider range of users.
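Conceptually, TIPO adds one extra LLM call between the user's short prompt and the T2I model. The following is a minimal sketch of that pipeline shape only; `toy_llm` and `generate_image` are hypothetical stand-ins, not the actual TIPO model or any real text-to-image API:

```python
# Sketch of TIPO-style "text presampling": the user prompt is refined and
# extended by an LLM before it is handed to the text-to-image model.

def presample(user_prompt: str, llm) -> str:
    """Refine/extend a short user prompt before the T2I step."""
    return llm(user_prompt)

def toy_llm(prompt: str) -> str:
    # Hypothetical stand-in for the TIPO LLM: appends detail tags.
    return prompt + ", detailed background, soft lighting, masterpiece"

def generate_image(prompt: str) -> str:
    # Hypothetical stand-in for any T2I model.
    return f"<image generated from: {prompt}>"

refined = presample("scenery", toy_llm)
image = generate_image(refined)
print(refined)  # → scenery, detailed background, soft lighting, masterpiece
```

The point of the design is that the heavy lifting of prompt engineering moves into the presampling LLM, so the user can supply only a minimal input such as a single tag.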

## Technical Details
### Model arch and training
This model uses the LLaMA architecture with 200M parameters. The training data is a combined version of Danbooru2023 and Coyo-HD-11M, for a total of around 50B tokens seen. For more information, please refer to the tech report and the following table.
| Property | TIPO-200M | TIPO-200M-ft | TIPO-500M |
| --- | --- | --- | --- |
| Model Type | LLaMA | LLaMA | LLaMA |
| Max ctx length | 1024 | 1024 | 1024 |
| Batch Size | 2048 | 2048 | 3584 |
| Training Data | Danbooru, GBC10M, 5 epoch; Danbooru, GBC10M, Coyo11M, 3 epoch | Danbooru(pixtral), Coyo11M, 2 epoch | Danbooru, GBC10M, Coyo11M, 5 epoch |
| Real Token Seen* | 40B tokens | 50B tokens (10B more than TIPO-200M) | 30B tokens |
| Training Hardware | RTX 3090 x 4 | RTX 3090 x 4 | H100 x 8 |
| Training Time | 420 hours` | 120 hours` | 100 hours` |
| Huggingface | KBlueLeaf/TIPO-200M · Hugging Face | KBlueLeaf/TIPO-200M-ft · Hugging Face | You Are HERE |
\*: We only count non-padding tokens in "tokens seen", since the training data has a very large length range.
\`: Since the training data is pretty short, reaching the same token-seen count takes longer than in general LLM pretraining. As a reference, with a max ctx length of 4096 and almost all data reaching that length, a 200M model needs only about 2 days to reach 10B tokens seen on RTX 3090 x 4.
## Evaluation
Evaluation was done on the TIPO-200M model.
We compared TIPO to other models across several tests and metrics:

### Scenery tag test
In this test, we use a single "scenery" tag as input (with some fixed meta tags) to test whether each prompt-generation method can achieve the desired distribution of outputs while maintaining image quality.
| Scenery Tag Test | Original | GPT4o-mini | Prompt DB | Promptis | TIPO (ours) |
| --- | --- | --- | --- | --- | --- |
| FDD ↓ | 0.3558 | 0.5414 | 0.3247 | 0.2350 | 0.2282 |
| Aesthetic ↑ | 5.0569 | 6.3676 | 6.1609 | 5.9468 | 6.2571 |
| AI Corrupt ↑ | 0.4257 | 0.7490 | 0.5024 | 0.5669 | 0.9195 |
### Short/Truncated Long test
In this test, we use short captions or manually truncated captions from GBC10M and CoyoHD11M. This test examines each prompt-generation method's ability to handle almost-complete prompts.
| Short | Original | GPT4o-mini | Prompt DB | Promptis | TIPO (ours) |
| --- | --- | --- | --- | --- | --- |
| FDD ↓ | 0.0957 | 0.1668 | 0.0980 | 0.1783 | 0.1168 |
| Aesthetic ↑ | 5.8370 | 6.0589 | 5.8213 | 5.7963 | 5.8531 |
| AI Corrupt ↑ | 0.7113 | 0.6985 | 0.7064 | 0.6314 | 0.7131 |
| Truncated Long | Original | GPT4o-mini | Prompt DB | Promptis | TIPO (ours) |
| --- | --- | --- | --- | --- | --- |
| FDD ↓ | 0.0955 | 0.1683 | 0.1247 | 0.2096 | 0.1210 |
| Aesthetic ↑ | 5.7497 | 6.0168 | 5.8191 | 5.7759 | 5.8364 |
| AI Corrupt ↑ | 0.6868 | 0.6712 | 0.6741 | 0.5925 | 0.7130 |
## License
This model is released under the Kohaku License 1.0. You can check the URL provided above or the LICENSE file in this repo.
## Citation

```bibtex
@misc{yeh2024tipotextimagetext,
      title={TIPO: Text to Image with Text Presampling for Prompt Optimization},
      author={Shih-Ying Yeh and Sang-Hyun Park and Giyeong Oh and Min Song and Youngjae Yu},
      year={2024},
      eprint={2411.08127},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.08127},
}
```