johnbrennan committed
Commit b0c6a5a · verified · 1 Parent(s): 9b0e7d4

Model card auto-generated by SimpleTuner

Files changed (1)
  1. README.md +246 -0
README.md ADDED
---
license: other
base_model: "black-forest-labs/FLUX.1-dev"
tags:
  - flux
  - flux-diffusers
  - text-to-image
  - diffusers
  - simpletuner
  - not-for-all-audiences
  - lora
  - template:sd-lora
  - lycoris
inference: true
widget:
- text: 'unconditional (blank prompt)'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_0_0.png
- text: 'a hamster drumming in the style of a r1ch5ull1v4n caricature'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_1_0.png
- text: 'a man playing the saxophone in the style of a r1ch5ull1v4n caricature'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_2_0.png
- text: 'a woman holding a sign that says ''I LOVE PROMPTS!'' in the style of a r1ch5ull1v4n caricature'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_3_0.png
- text: 'a hipster with a beard, sitting on a chair in the style of a r1ch5ull1v4n caricature'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_4_0.png
- text: 'a female punk rocker guitarist in the style of a r1ch5ull1v4n caricature'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_5_0.png
- text: 'a concert event with a pop star on stage in the spotlight. There is a large crowd and flare from stadium lights in the style of a r1ch5ull1v4n caricature'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_6_0.png
- text: 'a man in the bayou playing a harmonica in the style of a r1ch5ull1v4n caricature'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_7_0.png
- text: 'LeBron James and Willie Nelson in the style of a r1ch5ull1v4n caricature'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_8_0.png
- text: 'a pig, in a post apocalyptic world, with a shotgun, in a leather jacket, in a desert, with a motorcycle'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_9_0.png
---

# sullivan_ResumeBS6

This is a LyCORIS adapter derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).

No validation prompt was used during training.

## Validation settings
- CFG: `2.5`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `FlowMatchEulerDiscreteScheduler`
- Seed: `42`
- Resolution: `1024x1024`
- Skip-layer guidance:

Note: The validation settings are not necessarily the same as the [training settings](#training-settings).

You can find some example images in the following gallery:

<Gallery />

The text encoder **was not** trained.
You may reuse the base model text encoder for inference.

## Training settings

- Training epochs: 0
- Training steps: 150
- Learning rate: 0.0004
- Learning rate schedule: polynomial
- Warmup steps: 100
- Max grad norm: 0.1
- Effective batch size: 6 (see the sketch after this list)
- Micro-batch size: 3
- Gradient accumulation steps: 2
- Number of GPUs: 1
- Gradient checkpointing: True
- Prediction type: flow-matching (extra parameters=['shift=1.0', 'flux_guidance_mode=constant', 'flux_guidance_value=1.0', 'flow_matching_loss=compatible'])
- Optimizer: adamw_bf16
- Trainable parameter precision: Pure BF16
- Caption dropout probability: 10.0%

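The effective batch size reported above is simply the product of the per-GPU micro-batch size, the gradient accumulation steps, and the number of GPUs; a quick check of the arithmetic (plain Python, not SimpleTuner code):

```python
# Effective batch size = micro-batch size * gradient accumulation steps * number of GPUs
micro_batch_size = 3
gradient_accumulation_steps = 2
num_gpus = 1

effective_batch_size = micro_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch_size)  # 6, matching the value listed above
```
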
### LyCORIS Config:
```json
{
    "algo": "lokr",
    "multiplier": 1.0,
    "linear_dim": 10000,
    "linear_alpha": 1,
    "factor": 16,
    "apply_preset": {
        "target_module": [
            "Attention",
            "FeedForward"
        ],
        "module_algo_map": {
            "Attention": {
                "factor": 16
            },
            "FeedForward": {
                "factor": 8
            }
        }
    }
}
```

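For context, the JSON above is the kind of preset the LyCORIS library consumes. The snippet below is a rough sketch of how such a preset could be applied to a transformer before training, assuming the `lycoris` package's documented `LycorisNetwork.apply_preset` / `create_lycoris` interface; keyword names may vary between versions, and this is not SimpleTuner's exact code.

```python
import torch
from lycoris import create_lycoris, LycorisNetwork

# Stand-in for the FLUX transformer being adapted
# (in practice this would be pipeline.transformer, as in the Inference section below).
transformer = torch.nn.Module()

# Restrict LyCORIS to Attention/FeedForward modules and set per-module LoKr factors,
# mirroring the "apply_preset" block above.
LycorisNetwork.apply_preset({
    "target_module": ["Attention", "FeedForward"],
    "module_algo_map": {
        "Attention": {"factor": 16},
        "FeedForward": {"factor": 8},
    },
})

lycoris_net = create_lycoris(
    transformer,
    1.0,               # multiplier
    linear_dim=10000,  # effectively "full-dimension" LoKr
    linear_alpha=1,
    algo="lokr",
    factor=16,
)
lycoris_net.apply_to()  # inject the LoKr modules so they are trained instead of the full weights
```

Per-module factors (16 for attention, 8 for feed-forward) trade capacity for adapter size; the smaller factor on the larger feed-forward blocks keeps the adapter compact.
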
## Datasets

### sullivan-512
- Repeats: 23
- Total number of images: 25
- Total number of aspect buckets: 4
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### sullivan-768
- Repeats: 23
- Total number of images: 25
- Total number of aspect buckets: 3
- Resolution: 0.589824 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### sullivan-1024
- Repeats: 11
- Total number of images: 25
- Total number of aspect buckets: 7
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

### sullivan-1536
- Repeats: 5
- Total number of images: 25
- Total number of aspect buckets: 9
- Resolution: 2.359296 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No

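The `Resolution` figures are pixel areas expressed in megapixels and correspond to the square edge lengths in the dataset names; a quick sanity check (plain arithmetic, not SimpleTuner code):

```python
import math

# 0.262144 MP = 512^2 px, 0.589824 MP = 768^2 px, 1.048576 MP = 1024^2 px, 2.359296 MP = 1536^2 px
for name, megapixels in [
    ("sullivan-512", 0.262144),
    ("sullivan-768", 0.589824),
    ("sullivan-1024", 1.048576),
    ("sullivan-1536", 2.359296),
]:
    edge = math.sqrt(megapixels * 1_000_000)
    print(f"{name}: ~{edge:.0f}x{edge:.0f}")
```
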
## Inference

```python
import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris_from_weights


def download_adapter(repo_id: str):
    import os
    from huggingface_hub import hf_hub_download
    adapter_filename = "pytorch_lora_weights.safetensors"
    cache_dir = os.environ.get('HF_PATH', os.path.expanduser('~/.cache/huggingface/hub/models'))
    cleaned_adapter_path = repo_id.replace("/", "_").replace("\\", "_").replace(":", "_")
    path_to_adapter = os.path.join(cache_dir, cleaned_adapter_path)
    path_to_adapter_file = os.path.join(path_to_adapter, adapter_filename)
    os.makedirs(path_to_adapter, exist_ok=True)
    hf_hub_download(
        repo_id=repo_id, filename=adapter_filename, local_dir=path_to_adapter
    )

    return path_to_adapter_file

model_id = 'black-forest-labs/FLUX.1-dev'
adapter_repo_id = 'johnbrennan/sullivan_ResumeBS6'
adapter_filename = 'pytorch_lora_weights.safetensors'
adapter_file_path = download_adapter(repo_id=adapter_repo_id)
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)  # loading directly in bf16
lora_scale = 1.0
wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_file_path, pipeline.transformer)
wrapper.merge_to()

prompt = "An astronaut is riding a horse through the jungles of Thailand."


## Optional: quantise the model to save on vram.
## Note: The model was quantised during training, and so it is recommended to do the same during inference time.
from optimum.quanto import quantize, freeze, qint8
quantize(pipeline.transformer, weights=qint8)
freeze(pipeline.transformer)

pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')  # the pipeline is already in its target precision level
image = pipeline(
    prompt=prompt,
    num_inference_steps=20,
    generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(42),
    width=1024,
    height=1024,
    guidance_scale=2.5,
).images[0]
image.save("output.png", format="PNG")
```

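Note that the example prompt above does not include the trained style trigger. All of the widget prompts in this card append the phrase `in the style of a r1ch5ull1v4n caricature`, so to invoke the style you would likely swap in a prompt such as:

```python
# Prompt taken from the widget examples above; substitute any subject before the style clause.
prompt = "a man playing the saxophone in the style of a r1ch5ull1v4n caricature"
```
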

## Exponential Moving Average (EMA)

SimpleTuner generates both a safetensors variant of the EMA weights and a `.pt` file.

The safetensors file is intended for inference, while the `.pt` file is for continuing fine-tuning.

The EMA model may provide a more well-rounded result, but it will typically feel undertrained compared to the full model, since it is a running, decayed average of the model weights.
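
As an illustration of what "running, decayed average" means here, EMA weights are maintained roughly as in the minimal sketch below; this is the standard EMA update, not SimpleTuner's exact implementation, and the decay value is an assumed example.

```python
import torch

# Minimal sketch of an EMA update applied after each optimiser step.
# decay close to 1.0 means the EMA trails the live weights slowly; 0.999 is an assumed example value.
def ema_update(ema_params, model_params, decay=0.999):
    with torch.no_grad():
        for ema_p, p in zip(ema_params, model_params):
            # ema <- decay * ema + (1 - decay) * live weight
            ema_p.mul_(decay).add_(p, alpha=1.0 - decay)
```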