flux-lora-training

This is a standard PEFT LoRA derived from black-forest-labs/FLUX.1-schnell.

The main validation prompt used during training was:

A happy pizza.

Validation settings

  • CFG: 0.0
  • CFG Rescale: 0.0
  • Steps: 15
  • Sampler: FlowMatchEulerDiscreteScheduler
  • Seed: 42
  • Resolution: 1024x1024
  • Skip-layer guidance: none

Note: The validation settings are not necessarily the same as the training settings.

The example gallery was generated from the following prompts:

  • Prompt: unconditional (blank prompt); negative prompt: empty
  • Prompt: "A happy pizza."; negative prompt: empty

The text encoder was not trained. You may reuse the base model text encoder for inference.

Training settings

  • Training epochs: 2

  • Training steps: 50

  • Learning rate: 0.0001

    • Learning rate schedule: constant_with_warmup
    • Warmup steps: 100 (more than the 50 total training steps, so the run ended before the learning rate finished warming up; see the sketch after this list)
  • Max grad norm: 1.0

  • Effective batch size: 16 (= 2 × 8 × 1)

    • Micro-batch size: 2
    • Gradient accumulation steps: 8
    • Number of GPUs: 1
  • Gradient checkpointing: True

  • Prediction type: flow-matching (extra parameters=['flux_fast_schedule', 'flux_schedule_auto_shift', 'shift=0.0', 'flux_guidance_mode=constant', 'flux_guidance_value=1.0', 'flow_matching_loss=compatible', 'flux_lora_target=all+ffs']); a sketch of the flow-matching objective follows this list

  • Optimizer: adamw_bf16

  • Trainable parameter precision: Pure BF16

  • Caption dropout probability: 0.0%

  • LoRA Rank: 16

  • LoRA Alpha: None

  • LoRA Dropout: 0.1

  • LoRA initialisation style: default (a PEFT configuration sketch follows this list)
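For reference, here is a minimal sketch of how the schedule and batch settings above fit together. The model, optimizer and data are placeholders; only the numbers (learning rate, warmup steps, accumulation steps, grad clipping) come from the settings listed:

import torch
from transformers import get_constant_schedule_with_warmup

# Stand-in model and optimizer; the real run trained LoRA adapters on the FLUX transformer.
model = torch.nn.Linear(16, 16)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# constant_with_warmup ramps the LR linearly over the warmup steps, then holds it flat.
# With 100 warmup steps but only 50 training steps, the LR tops out at 1e-4 * 50/100 = 5e-5.
scheduler = get_constant_schedule_with_warmup(optimizer, num_warmup_steps=100)

accumulation_steps = 8  # micro-batch 2 x 8 accumulations x 1 GPU = effective batch 16
for step in range(50):
    for _ in range(accumulation_steps):
        batch = torch.randn(2, 16)               # micro-batch of 2
        loss = model(batch).pow(2).mean()        # stand-in loss
        (loss / accumulation_steps).backward()   # average gradients over the accumulation window
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # max grad norm 1.0
    optimizer.step()
    scheduler.step()        # one scheduler step per optimizer step
    optimizer.zero_grad()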
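The flow-matching objective can be sketched in its common rectified-flow form. This illustrates the general technique, not the trainer's exact loss (the 'flux_schedule_auto_shift' and 'flow_matching_loss=compatible' options above modify the details); model here is a placeholder taking (x_t, t):

import torch
import torch.nn.functional as F

def flow_matching_loss(model, x0):
    # Sample a timestep t in [0, 1) per example, broadcastable over the data dims.
    t = torch.rand(x0.shape[0], *([1] * (x0.dim() - 1)))
    noise = torch.randn_like(x0)
    # Linearly interpolate between data and noise: x_t = (1 - t) * x0 + t * noise.
    x_t = (1.0 - t) * x0 + t * noise
    # The network learns to predict the constant velocity from data to noise.
    v_target = noise - x0
    return F.mse_loss(model(x_t, t), v_target)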
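The LoRA hyperparameters roughly correspond to a PEFT configuration like the sketch below. The target_modules list is an assumption standing in for whatever 'flux_lora_target=all+ffs' expands to, and since the alpha above is None, the value shown is only the common rank-equals-alpha convention; the adapter's own adapter_config.json is authoritative:

from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                    # LoRA rank from the settings above
    lora_alpha=16,           # assumption: alpha was reported as None in the run config
    lora_dropout=0.1,
    init_lora_weights=True,  # 'default' initialisation style
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # assumed attention projections; 'all+ffs' also covers feed-forward layers
)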

Datasets

default_dataset

  • Repeats: 0
  • Total number of images: 90
  • Total number of aspect buckets: 1
  • Resolution: 1.048576 megapixels
  • Cropped: True
  • Crop style: center
  • Crop aspect: square
  • Used for regularisation data: No

default_dataset_512

  • Repeats: 0
  • Total number of images: 90
  • Total number of aspect buckets: 1
  • Resolution: 0.262144 megapixels
  • Cropped: True
  • Crop style: center
  • Crop aspect: square
  • Used for regularisation data: No

default_dataset_768

  • Repeats: 0
  • Total number of images: 90
  • Total number of aspect buckets: 1
  • Resolution: 0.589824 megapixels
  • Cropped: True
  • Crop style: center
  • Crop aspect: square
  • Used for regularisation data: No
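All three datasets appear to be the same 90 source images bucketed at three square resolutions: 1.048576 megapixels is 1024×1024, 0.589824 is 768×768, and 0.262144 is 512×512.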

Inference

import torch
from diffusers import DiffusionPipeline

model_id = 'black-forest-labs/FLUX.1-schnell'
adapter_id = 'manbeast3b/flux-lora-training'

# Load the base model directly in bf16, then attach the LoRA adapter.
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipeline.load_lora_weights(adapter_id)

## Optional: quantise the model to save on VRAM.
## Note: the model was quantised during training, so it is recommended to do the same at inference time.
from optimum.quanto import quantize, freeze, qint8
quantize(pipeline.transformer, weights=qint8)
freeze(pipeline.transformer)

device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
pipeline.to(device)  # the pipeline is already at its target precision level

prompt = "A happy pizza."
image = pipeline(
    prompt=prompt,
    num_inference_steps=15,
    generator=torch.Generator(device=device).manual_seed(42),
    width=1024,
    height=1024,
    guidance_scale=0.0,  # FLUX.1-schnell is guidance-distilled, so CFG stays at 0.0
).images[0]
image.save("output.png", format="PNG")
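If you will reuse the pipeline for many generations, the adapter can optionally be merged into the base weights first (before any quantisation); fuse_lora and unfuse_lora are standard diffusers LoRA APIs:

# Optional: merge the LoRA into the base weights for slightly faster repeated inference.
pipeline.fuse_lora()
# ... run pipeline(...) as above ...
pipeline.unfuse_lora()  # revert to the unfused base model if needed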