---
license: other
base_model: black-forest-labs/FLUX.1-dev
tags:
- flux
- flux-diffusers
- text-to-image
- diffusers
- simpletuner
- safe-for-work
- lora
- template:sd-lora
- standard
inference: true
widget:
- text: unconditional (blank prompt)
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/image_0_0.png
- text: >-
A scene from One Piece. Monkey D. Luffy holding a sign that says 'I LOVE
PROMPTS!', he is standing full body on a beach at sunset. He is wearing
his iconic red vest, blue shorts, and straw hat. The setting sun casts a
dynamic shadow on his cheerful and carefree expression.
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/image_1_0.png
- text: >-
A scene from One Piece. Monkey D. Luffy jumping out of a propeller
airplane, sky diving. He looks thrilled, his straw hat tied to his neck is
flying in the wind, and his arms are stretched out wide as if ready to
grab something. The sky is clear and blue, with birds flying in the
distance.
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/image_2_0.png
- text: >-
A scene from One Piece. Monkey D. Luffy spinning a basketball on his
finger on a basketball court. He is wearing a Lakers jersey with the #12
on it, his straw hat sits loosely on his head. The basketball hoop and
crowd are in the background cheering him. He is grinning widely with
excitement.
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/image_3_0.png
- text: >-
A scene from One Piece. Monkey D. Luffy is wearing a suit in an office,
shaking the hand of a businesswoman. The woman has purple hair and is
wearing professional attire. There is a Google logo in the background. It
is during daytime, and the overall sentiment is one of accomplishment and
Luffy’s usual carefree charm.
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/image_4_0.png
- text: >-
A scene from One Piece. Monkey D. Luffy is fighting a large brown grizzly
bear, deep in a forest. The bear is tall and standing on two legs,
roaring. The bear is also wearing a crown because it is the king of all
bears. Around them are tall trees and other animals watching as Luffy
grins, stretching his arm back for a punch.
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/image_5_0.png
---

# luffy-standard-lora-1
This is a standard PEFT LoRA derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).

No validation prompt was used during training.
## Validation settings
- CFG: 3.5
- CFG Rescale: 0.0
- Steps: 20
- Sampler: FlowMatchEulerDiscreteScheduler
- Seed: 42
- Resolution: 1024x1024
- Skip-layer guidance:
Note: The validation settings are not necessarily the same as the training settings.
You can find some example images in the following gallery:

<Gallery />
The text encoder was not trained. You may reuse the base model text encoder for inference.
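
The sampler listed above is diffusers' `FlowMatchEulerDiscreteScheduler`, which is also the default scheduler shipped with FLUX.1-dev. If your pipeline has been reconfigured, you can pin it explicitly; a minimal sketch, reusing the `pipeline` object constructed in the Inference section below:

```python
# Optional: make sure the sampler matches the validation settings above.
# FLUX.1-dev already ships with FlowMatchEulerDiscreteScheduler, so this is usually a no-op.
from diffusers import FlowMatchEulerDiscreteScheduler

pipeline.scheduler = FlowMatchEulerDiscreteScheduler.from_config(pipeline.scheduler.config)
```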
## Training settings
- Training epochs: 128
- Training steps: 2700
- Learning rate: 0.0001
  - Learning rate schedule: constant
  - Warmup steps: 100
- Max grad norm: 2.0
- Effective batch size: 48
  - Micro-batch size: 48
  - Gradient accumulation steps: 1
  - Number of GPUs: 1
- Gradient checkpointing: True
- Prediction type: flow-matching (extra parameters=['shift=3', 'flux_guidance_mode=constant', 'flux_guidance_value=1.0', 'flow_matching_loss=compatible', 'flux_lora_target=all'])
- Optimizer: adamw_bf16
- Trainable parameter precision: Pure BF16
- Caption dropout probability: 0.0%
- LoRA Rank: 128
- LoRA Alpha: None
- LoRA Dropout: 0.1
- LoRA initialisation style: default
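
For reference, the LoRA hyperparameters above correspond roughly to a PEFT `LoraConfig` like the sketch below. This is an illustration only: the real target-module selection is driven by SimpleTuner's `flux_lora_target=all` setting, the module names shown are assumptions, and since LoRA Alpha is reported as None it is assumed here to fall back to the rank.

```python
# Illustrative only -- an approximate PEFT LoraConfig for the values reported above.
# target_modules and the alpha fallback are assumptions, not SimpleTuner's exact behaviour.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,                   # LoRA Rank: 128
    lora_alpha=128,          # LoRA Alpha: None -> assumed to fall back to the rank
    lora_dropout=0.1,        # LoRA Dropout: 0.1
    init_lora_weights=True,  # "default" initialisation style
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # assumption: attention projections
)
```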
## Datasets
### luffy-512
- Repeats: 2
- Total number of images: 306
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels (512x512 pixel area; see the sketch after this list)
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
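
The resolution above is expressed as a pixel area rather than fixed dimensions: 0.262144 megapixels is exactly 512 x 512 pixels. The helper below is a hypothetical sketch of area-based bucketing, not SimpleTuner's actual implementation; it only illustrates how different aspect ratios share the same pixel budget.

```python
# Hypothetical helper (not SimpleTuner's code): illustrates why an area-based
# "resolution" of 0.262144 megapixels corresponds to a 512x512 pixel budget.
TARGET_AREA = 0.262144 * 1_000_000  # 262,144 pixels == 512 * 512

def bucket_size(aspect_ratio: float, area: float = TARGET_AREA, multiple: int = 64) -> tuple[int, int]:
    """Pick a width/height with the requested aspect ratio and ~constant pixel area,
    snapped to a multiple of 64 (a common latent-size constraint; an assumption here)."""
    height = (area / aspect_ratio) ** 0.5
    width = height * aspect_ratio
    snap = lambda v: max(multiple, round(v / multiple) * multiple)
    return snap(width), snap(height)

print(bucket_size(1.0))     # (512, 512) -- the single aspect bucket used by this dataset
print(bucket_size(16 / 9))  # (704, 384) -- what a wider bucket would look like at the same area
```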
## Inference
```python
import torch
from diffusers import DiffusionPipeline

model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'adipanda/luffy-standard-lora-1'
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)  # loading directly in bf16
pipeline.load_lora_weights(adapter_id)

prompt = "An astronaut is riding a horse through the jungles of Thailand."

## Optional: quantise the model to save on VRAM.
## Note: The model was quantised during training, so it is recommended to do the same at inference time.
from optimum.quanto import quantize, freeze, qint8
quantize(pipeline.transformer, weights=qint8)
freeze(pipeline.transformer)

device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
pipeline.to(device)  # the pipeline is already in its target precision level

image = pipeline(
    prompt=prompt,
    num_inference_steps=20,
    generator=torch.Generator(device=device).manual_seed(42),
    width=1024,
    height=1024,
    guidance_scale=3.5,
).images[0]
image.save("output.png", format="PNG")
```
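
As an alternative to the plain `load_lora_weights` call above, recent diffusers releases (with the PEFT backend) let you name the adapter and scale its influence. The snippet below is a minimal sketch; the adapter name `luffy` is arbitrary.

```python
# Optional: weight the LoRA's influence (requires a recent diffusers version with the PEFT backend).
# Use this instead of the plain load_lora_weights(adapter_id) call above; the name "luffy" is arbitrary.
pipeline.load_lora_weights(adapter_id, adapter_name="luffy")
pipeline.set_adapters(["luffy"], adapter_weights=[0.85])  # <1.0 softens the style, >1.0 strengthens it

# Or bake the LoRA into the base weights for slightly faster repeated inference:
# pipeline.fuse_lora(lora_scale=0.85)
```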