# logo-lora

This is a standard PEFT LoRA derived from black-forest-labs/FLUX.1-dev, designed to generate logos for car racing game apps.
## 🚀 Quick Start
The main validation prompt used during training was:
```
Generate a logo for a car racing game app
```
### Validation settings

- CFG: 3.0
- CFG Rescale: 0.0
- Steps: 20
- Sampler: FlowMatchEulerDiscreteScheduler
- Seed: 42
- Resolution: 512
- Skip-layer guidance:
Note: The validation settings are not necessarily the same as the training settings.
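To reproduce the validation sampler explicitly, you can set the scheduler on the pipeline from the Usage example below. This is a minimal sketch assuming the standard diffusers API; FLUX.1-dev pipelines ship with FlowMatchEulerDiscreteScheduler by default, so this step is usually a no-op:

```python
from diffusers import FlowMatchEulerDiscreteScheduler

# Explicitly select the sampler used for validation (usually already the default for FLUX).
pipeline.scheduler = FlowMatchEulerDiscreteScheduler.from_config(pipeline.scheduler.config)
```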
Example validation images are available in the gallery on the model page.
The text encoder was not trained. You may reuse the base model text encoder for inference.
## ✨ Features

- Standard PEFT LoRA: Derived from the `black-forest-labs/FLUX.1-dev` base model.
- Specific Prompt Design: Trained with a specific prompt for generating car racing game app logos.
- Flexible Inference: Allows for reuse of the base model text encoder during inference (see the sketch below).
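As a quick illustration of that flexibility, the adapter can be fused into or removed from the base weights at runtime; `fuse_lora`/`unfuse_lora` are part of diffusers' standard LoRA loader API. A sketch, assuming the `pipeline` from the Usage example below:

```python
# Bake the LoRA weights into the base model for slightly faster inference.
pipeline.fuse_lora()
# ... run generations ...
# Restore the original base weights (the text encoder is untouched throughout).
pipeline.unfuse_lora()
```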
## 📦 Installation
No specific installation steps are provided in the original document.
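That said, the usage example below assumes the Hugging Face diffusion stack; an environment along these lines should work (the package list is an assumption, not from the original card):

```bash
pip install torch diffusers transformers accelerate peft optimum-quanto sentencepiece protobuf
```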
## 💻 Usage Examples

### Basic Usage
```python
import torch
from diffusers import DiffusionPipeline
from optimum.quanto import quantize, freeze, qint8

model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'brianling16/logo-lora'

# Load the base pipeline and attach the LoRA adapter.
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipeline.load_lora_weights(adapter_id)

# Quantise the transformer to int8 to reduce VRAM usage, then freeze it.
quantize(pipeline.transformer, weights=qint8)
freeze(pipeline.transformer)

# Pick the best available device.
device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
pipeline.to(device)

# The validation prompt and settings from the Quick Start section.
prompt = "Generate a logo for a car racing game app"
image = pipeline(
    prompt=prompt,
    num_inference_steps=20,
    generator=torch.Generator(device=device).manual_seed(42),
    width=512,
    height=512,
    guidance_scale=3.0,
).images[0]
image.save("output.png", format="PNG")
```
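Optionally, the adapter's strength can be scaled at inference time via diffusers' PEFT integration. `"default_0"` is the name diffusers auto-assigns to the first adapter loaded without an explicit name; that name is an assumption here, since the card does not name the adapter:

```python
# Reduce the LoRA influence to 80% of its trained strength.
pipeline.set_adapters(["default_0"], adapter_weights=[0.8])
```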
## 📚 Documentation

### Training settings
- Training epochs: 0
- Training steps: 2600
- Learning rate: 8e-05
- Learning rate schedule: polynomial
- Warmup steps: 10
- Max grad norm: 2.0
- Effective batch size: 1
- Micro-batch size: 1
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Gradient checkpointing: True
- Prediction type: flow-matching (extra parameters=['shift=3', 'flux_guidance_mode=constant', 'flux_guidance_value=1.0', 'flow_matching_loss=compatible', 'flux_lora_target=all'])
- Optimizer: adamw_bf16
- Trainable parameter precision: Pure BF16
- Caption dropout probability: 5.0%
- LoRA Rank: 64
- LoRA Alpha: None
- LoRA Dropout: 0.1
- LoRA initialisation style: default
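For readers who want to recreate this adapter shape, a hypothetical `peft.LoraConfig` mirroring the listed hyperparameters might look like the following sketch; the alpha value is an assumption, since the card lists it as None (rank-matched alpha is a common convention):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                    # LoRA rank, as listed above
    lora_alpha=64,           # assumption: the card lists alpha as None
    lora_dropout=0.1,        # as listed above
    init_lora_weights=True,  # "default" initialisation style
)
```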
### Datasets

#### app_data
- Repeats: 4
- Total number of images: 10001
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels (512 × 512 = 262,144 pixels)
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
## 🔧 Technical Details

The model is a standard PEFT LoRA adapter for the black-forest-labs/FLUX.1-dev base model. During training, only the adapter weights were updated; the text encoder was frozen. The prediction type is flow-matching, configured with shift=3, constant Flux guidance at 1.0, the compatible flow-matching loss, and LoRA applied to all targetable layers (see Training settings above).
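As a rough sketch of what the flow-matching objective and the shift parameter mean (standard rectified-flow formulation, not taken from this card): the model learns the velocity along a straight path between data and noise, and the shift reshapes the timestep schedule toward noisier steps.

$$x_t = (1 - t)\,x_0 + t\,\epsilon, \qquad \mathcal{L} = \mathbb{E}_{t,\,x_0,\,\epsilon}\left[\left\lVert v_\theta(x_t, t) - (\epsilon - x_0)\right\rVert^2\right], \qquad t_{\text{shifted}} = \frac{s\,t}{1 + (s - 1)\,t},\quad s = 3.$$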
## 📄 License

The license is specified as `other`.