---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: apache-2.0
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- FLUX.1-dev
- science
- materiomics
- bio-inspired
- materials science
- generative AI for science
instance_prompt: <leaf microstructure>
widget: []
---

# FLUX.1 [dev] Fine-tuned with Leaf Images

FLUX.1 [dev] is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions.

Install `diffusers`:

```bash
pip install -U diffusers
```

## Model description

These are LoRA adaptation weights for the FLUX.1 [dev] model (`black-forest-labs/FLUX.1-dev`). The base model is gated, so you must first request access to it before loading this LoRA adapter.

## Trigger keywords

The following images were used during fine-tuning with the keyword \<leaf microstructure\>:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/sI_exTnLy6AtOFDX1-7eq.png)

Full dataset used for training: [lamm-mit/leaf-flux-images-and-captions](https://huggingface.co/datasets/lamm-mit/leaf-flux-images-and-captions)

You should include \<leaf microstructure\> in your prompt to trigger this feature during image generation.

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https://huggingface.co/lamm-mit/leaf-FLUX.1-dev/resolve/main/leaf-FLUX-inference-example.ipynb)

## How to use

Defining some helper functions:

```python
import os
from datetime import datetime
from PIL import Image

def generate_filename(base_name, extension=".png"):
    # Timestamped filename to avoid overwriting earlier outputs
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"{base_name}_{timestamp}{extension}"

def save_image(image, directory, base_name="image_grid"):
    filename = generate_filename(base_name)
    file_path = os.path.join(directory, filename)
    image.save(file_path)
    print(f"Image saved as {file_path}")

def image_grid(imgs, rows, cols, save=True, save_dir='generated_images', base_name="image_grid",
               save_individual_files=False):
    if not os.path.exists(save_dir):
        os.makedirs(save_dir)

    assert len(imgs) == rows * cols

    w, h = imgs[0].size
    grid = Image.new('RGB', size=(cols * w, rows * h))

    for i, img in enumerate(imgs):
        # Place image i at column i % cols, row i // cols
        grid.paste(img, box=(i % cols * w, i // cols * h))
        if save_individual_files:
            save_image(img, save_dir, base_name=base_name + f'_{i}-of-{len(imgs)}_')

    if save and save_dir:
        save_image(grid, save_dir, base_name)

    return grid
```
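The grid assembly above places image `i` at column `i % cols` and row `i // cols`, scaled by the tile size. A minimal sketch of just that indexing logic (pure Python, no Pillow needed; `paste_boxes` is an illustrative name, not part of the snippet above):

```python
# Sketch of the paste-box layout used by image_grid: image i lands at
# (column i % cols, row i // cols), multiplied by the tile size (w, h).
def paste_boxes(n, cols, w, h):
    return [(i % cols * w, i // cols * h) for i in range(n)]

# Six 100x80 tiles arranged in a 2-row x 3-column grid:
print(paste_boxes(6, 3, 100, 80))
# → [(0, 0), (100, 0), (200, 0), (0, 80), (100, 80), (200, 80)]
```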

### Text-to-image

Model loading:

```python
from diffusers import FluxPipeline
import torch

repo_id = 'lamm-mit/leaf-L-FLUX.1-dev'

pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
    max_sequence_length=512,
)

# pipeline.enable_model_cpu_offload()  # saves VRAM by offloading the model to CPU; leave commented out if you have enough GPU VRAM

adapter = 'leaf-flux.safetensors'              # step 4000, final checkpoint
# adapter = 'leaf-flux-step-3000.safetensors'  # step 3000
# adapter = 'leaf-flux-step-3500.safetensors'  # step 3500

pipeline.load_lora_weights(repo_id, weight_name=adapter)

pipeline = pipeline.to('cuda')
```
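If you want to compare the intermediate checkpoints programmatically, a small helper can map a training step to the adapter filenames listed above (`adapter_for_step` is a hypothetical convenience function, not part of this repository):

```python
# Hypothetical helper: map a training step to the adapter checkpoint
# filename used in this model card (4000 is the final checkpoint).
def adapter_for_step(step, final_step=4000):
    if step == final_step:
        return 'leaf-flux.safetensors'
    return f'leaf-flux-step-{step}.safetensors'

print(adapter_for_step(3000))  # → leaf-flux-step-3000.safetensors
print(adapter_for_step(4000))  # → leaf-flux.safetensors
```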
Image generation - Example 1:

```python
prompt = """A cube that looks like a <leaf microstructure>, with a wrap-around sign that says 'MATERIOMICS'.

The cube is placed in a stunning mountain landscape with snow.

The photo is taken with a Sony A1 camera, bokeh, during the golden hour.
"""

num_samples = 1
num_rows = 1
n_steps = 25
guidance_scale = 5.0
all_images = []
for _ in range(num_rows):
    images = pipeline(
        prompt,
        num_inference_steps=n_steps,
        num_images_per_prompt=num_samples,
        guidance_scale=guidance_scale,
        height=1024, width=1920,
    ).images
    all_images.extend(images)

grid = image_grid(all_images, num_rows, num_samples, save_individual_files=True)
grid
```
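FLUX pipelines in `diffusers` typically expect `height` and `width` to be multiples of 16 (the 8× VAE downsampling combined with 2×2 latent packing); the 1024×1920 used above satisfies this. A quick sanity check you could run before generation, as a sketch (`check_flux_dims` is an illustrative helper, not a diffusers API):

```python
# Sketch: FLUX operates on latents downsampled 8x by the VAE and then
# packed 2x2, so generation sizes are usually kept to multiples of 16.
def check_flux_dims(height, width, multiple=16):
    return height % multiple == 0 and width % multiple == 0

print(check_flux_dims(1024, 1920))  # → True
print(check_flux_dims(1000, 1920))  # → False
```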

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/Cwb4D8puqAL32ywRXGQCn.png)