---
license: other
base_model: "terminusresearch/FluxBooru-v0.3"
tags:
  - flux
  - flux-diffusers
  - text-to-image
  - diffusers
  - simpletuner
  - not-for-all-audiences
  - lora
  - template:sd-lora
  - lycoris
inference: true
widget:
- text: 'unconditional (blank prompt)'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_0_0.png
- text: 'a figure standing on a rocky terrain, holding a long object, possibly a spear or staff, raised high above their head. The figure is clad in what appears to be heavy, textured clothing or armor. The background features a light, cloudy sky, with the landscape suggesting a barren, mountainous region. The figure''s stance suggests a moment of triumph or challenge.'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_1_0.png
---
# lora-training

This is a LyCORIS adapter derived from [terminusresearch/FluxBooru-v0.3](https://huggingface.co/terminusresearch/FluxBooru-v0.3).

The main validation prompt used during training was:

```
a figure standing on a rocky terrain, holding a long object, possibly a spear or staff, raised high above their head. The figure is clad in what appears to be heavy, textured clothing or armor. The background features a light, cloudy sky, with the landscape suggesting a barren, mountainous region. The figure's stance suggests a moment of triumph or challenge.
```
## Validation settings

- CFG: `3.0`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `FlowMatchEulerDiscreteScheduler`
- Seed: `42`
- Resolution: `1024x1024`
- Skip-layer guidance:

Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
You can find some example images in the following gallery:

<Gallery />

The text encoder **was not** trained. You may reuse the base model text encoder for inference.
## Training settings

- Training epochs: 15
- Training steps: 1000
- Learning rate: 0.001
- Learning rate schedule: polynomial (sketched after this list)
- Warmup steps: 100
- Max grad norm: 0.01
- Effective batch size: 2
  - Micro-batch size: 2
  - Gradient accumulation steps: 1
  - Number of GPUs: 1
- Gradient checkpointing: True
- Prediction type: flow-matching (extra parameters=['shift=3', 'flux_guidance_mode=constant', 'flux_guidance_value=3.5', 'flow_matching_loss=compatible'])
- Optimizer: adamw_bf16
- Trainable parameter precision: Pure BF16
- Caption dropout probability: 0.0%
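
As a rough illustration of the learning-rate settings above (polynomial decay over 1000 steps with 100 warmup steps, starting from 0.001), the schedule shape can be reproduced with diffusers' `get_scheduler` helper. The dummy parameter and plain `torch.optim.AdamW` below are stand-ins, not the `adamw_bf16` optimizer SimpleTuner actually uses; only the schedule is meant to match.

```python
# Illustrative sketch of the LR schedule listed above: 100 warmup steps,
# then polynomial decay over the remaining 1000-step horizon, starting at lr=0.001.
# The dummy parameter and plain AdamW are stand-ins for the real training setup.
import torch
from diffusers.optimization import get_scheduler

params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=1e-3)  # Learning rate: 0.001
lr_scheduler = get_scheduler(
    "polynomial",
    optimizer=optimizer,
    num_warmup_steps=100,      # Warmup steps: 100
    num_training_steps=1000,   # Training steps: 1000
)

for step in range(1000):
    optimizer.step()
    lr_scheduler.step()
    if step in (0, 99, 500, 999):
        print(f"step {step}: lr = {lr_scheduler.get_last_lr()[0]:.6f}")

# Effective batch size = micro-batch size x gradient accumulation steps x GPUs
effective_batch_size = 2 * 1 * 1  # = 2, matching the settings above
```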
### LyCORIS Config:

```json
{
    "algo": "lokr",
    "bypass_mode": true,
    "multiplier": 1.0,
    "full_matrix": true,
    "linear_dim": 10000,
    "linear_alpha": 1,
    "factor": 12,
    "apply_preset": {
        "target_module": [
            "Attention",
            "FeedForward"
        ],
        "module_algo_map": {
            "Attention": {
                "factor": 12
            },
            "FeedForward": {
                "factor": 6
            }
        }
    }
}
```
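
For context, this config applies the LoKr factorisation only to `Attention` and `FeedForward` modules, with a coarser factor of 12 for attention and 6 for feed-forward blocks. The sketch below shows roughly how such a config maps onto the `lycoris-lora` Python API; SimpleTuner does this wiring internally, so treat it as an illustration, and check options such as `bypass_mode` and `full_matrix` against the lycoris-lora version you have installed.

```python
# Rough illustration of how the JSON config above maps onto the lycoris-lora API.
# SimpleTuner performs this wiring internally; this is not the exact code it runs.
import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris, LycorisNetwork

pipeline = DiffusionPipeline.from_pretrained(
    "terminusresearch/FluxBooru-v0.3", torch_dtype=torch.bfloat16
)

# apply_preset restricts which module classes receive LoKr adapters
# and sets per-class factors.
LycorisNetwork.apply_preset(
    {
        "target_module": ["Attention", "FeedForward"],
        "module_algo_map": {
            "Attention": {"factor": 12},
            "FeedForward": {"factor": 6},
        },
    }
)

lycoris_net = create_lycoris(
    pipeline.transformer,  # the Flux transformer
    1.0,                   # multiplier
    linear_dim=10000,      # effectively "full dimension" for LoKr
    linear_alpha=1,
    algo="lokr",
    factor=12,
)
lycoris_net.apply_to()
```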
## Datasets

### dy_banzhangcaogao_ST-1024
- Repeats: 0
- Total number of images: 43
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### dy_banzhangcaogao_ST-768
- Repeats: 0
- Total number of images: 43
- Total number of aspect buckets: 1
- Resolution: 0.589824 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### dy_banzhangcaogao_ST-512
- Repeats: 0
- Total number of images: 43
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
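
The three entries appear to be the same 43 source images bucketed at three pixel areas; the megapixel figures above are simply the square 1024, 768, and 512 bucket areas expressed in megapixels:

```python
# The "Resolution: x.xxxxxx megapixels" figures above are square pixel areas:
for edge in (1024, 768, 512):
    print(f"{edge} x {edge} = {edge * edge / 1_000_000} megapixels")
# 1024 x 1024 = 1.048576 megapixels
# 768 x 768 = 0.589824 megapixels
# 512 x 512 = 0.262144 megapixels
```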
## Inference

```python
import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris_from_weights


def download_adapter(repo_id: str):
    import os
    from huggingface_hub import hf_hub_download
    adapter_filename = "pytorch_lora_weights.safetensors"
    cache_dir = os.environ.get('HF_PATH', os.path.expanduser('~/.cache/huggingface/hub/models'))
    cleaned_adapter_path = repo_id.replace("/", "_").replace("\\", "_").replace(":", "_")
    path_to_adapter = os.path.join(cache_dir, cleaned_adapter_path)
    path_to_adapter_file = os.path.join(path_to_adapter, adapter_filename)
    os.makedirs(path_to_adapter, exist_ok=True)
    hf_hub_download(
        repo_id=repo_id, filename=adapter_filename, local_dir=path_to_adapter
    )
    return path_to_adapter_file


model_id = 'terminusresearch/FluxBooru-v0.3'
adapter_repo_id = 'uxoah/lora-training'
adapter_file_path = download_adapter(repo_id=adapter_repo_id)
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)  # loading directly in bf16

lora_scale = 1.0
wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_file_path, pipeline.transformer)
wrapper.merge_to()

prompt = "a figure standing on a rocky terrain, holding a long object, possibly a spear or staff, raised high above their head. The figure is clad in what appears to be heavy, textured clothing or armor. The background features a light, cloudy sky, with the landscape suggesting a barren, mountainous region. The figure's stance suggests a moment of triumph or challenge."

## Optional: quantise the model to save on vram.
## Note: The model was quantised during training, and so it is recommended to do the same during inference time.
from optimum.quanto import quantize, freeze, qint8
quantize(pipeline.transformer, weights=qint8)
freeze(pipeline.transformer)

device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
pipeline.to(device)  # the pipeline is already in its target precision level
image = pipeline(
    prompt=prompt,
    num_inference_steps=20,
    generator=torch.Generator(device=device).manual_seed(42),
    width=1024,
    height=1024,
    guidance_scale=3.0,
).images[0]
image.save("output.png", format="PNG")
```
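
The `lora_scale` argument passed to `create_lycoris_from_weights` acts as the adapter's strength multiplier. If the effect is too strong, you can merge at a reduced weight instead, reusing `pipeline` and `adapter_file_path` from the example above (the `0.7` value is illustrative, not a recommendation from training):

```python
# Optional: merge the adapter at reduced strength.
# Reuses pipeline and adapter_file_path from the example above;
# re-create the pipeline first if you have already merged at full strength.
lora_scale = 0.7  # illustrative value; tune to taste
wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_file_path, pipeline.transformer)
wrapper.merge_to()
```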
## Exponential Moving Average (EMA)

SimpleTuner generates a safetensors variant of the EMA weights and a `.pt` file.

The safetensors file is intended to be used for inference, and the `.pt` file is for continuing finetuning.

The EMA model may provide a more well-rounded result, but typically will feel undertrained compared to the full model as it is a running decayed average of the model weights.
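
If you want to try the EMA weights for inference, they can be applied with the same `create_lycoris_from_weights` call as the main adapter. The filename below is a placeholder; check this repository's file listing for the actual EMA safetensors name.

```python
# Sketch: applying the EMA adapter weights instead of the main ones.
# Reuses `pipeline` from the Inference example above.
# NOTE: "ema/ema_model.safetensors" is a placeholder path; check the repo's
# file listing for the actual EMA safetensors filename.
from huggingface_hub import hf_hub_download
from lycoris import create_lycoris_from_weights

ema_file_path = hf_hub_download(
    repo_id="uxoah/lora-training",
    filename="ema/ema_model.safetensors",  # placeholder
)
wrapper, _ = create_lycoris_from_weights(1.0, ema_file_path, pipeline.transformer)
wrapper.merge_to()
```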