---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
instance_prompt: <leaf microstructure>
widget: []
---

# SDXL Fine-tuned with Leaf Images

## Model description

These are LoRA adaptation weights for the SDXL-base-1.0 model, fine-tuned on leaf microstructure images.

## Trigger keywords

The following images were used during fine-tuning with the keyword \<leaf microstructure\>:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/sI_exTnLy6AtOFDX1-7eq.png)

Include \<leaf microstructure\> in your prompt to trigger the fine-tuned behavior.

## How to use

First, define a few helper functions for saving images and assembling grids:

```python
import os
from datetime import datetime
from PIL import Image

def generate_filename(base_name, extension=".png"):
    # Timestamp the filename so repeated runs don't overwrite earlier results
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"{base_name}_{timestamp}{extension}"

def save_image(image, directory, base_name="image_grid"):
    # Save a PIL image into `directory` under a timestamped name
    filename = generate_filename(base_name)
    file_path = os.path.join(directory, filename)
    image.save(file_path)
    print(f"Image saved as {file_path}")

def image_grid(imgs, rows, cols, save=True, save_dir='generated_images', base_name="image_grid",
               save_individual_files=False):
    # Tile `rows * cols` PIL images into a single grid image
    if not os.path.exists(save_dir):
        os.makedirs(save_dir)

    assert len(imgs) == rows * cols

    w, h = imgs[0].size
    grid = Image.new('RGB', size=(cols * w, rows * h))

    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
        if save_individual_files:
            save_image(img, save_dir, base_name=base_name + f'_{i}-of-{len(imgs)}_')

    if save and save_dir:
        save_image(grid, save_dir, base_name)

    return grid
```
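
As a quick sanity check, the grid helper can be exercised with placeholder images (hypothetical data, just to illustrate the call signature):

```python
# Four solid-color placeholders arranged in a 2x2 grid (not saved to disk)
imgs = [Image.new("RGB", (64, 64), color=c) for c in ("red", "green", "blue", "gray")]
demo = image_grid(imgs, rows=2, cols=2, save=False)
```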

### Text-to-image

Model loading:

```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

repo_id = 'lamm-mit/SDXL-leaf-inspired'

# fp16-safe VAE to avoid numerical issues when running SDXL in float16
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

# Base pipeline with the LoRA adapter loaded on top
base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
base.load_lora_weights(repo_id)
_ = base.to("cuda")

# The refiner reuses the base model's second text encoder and VAE
refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,
    vae=base.vae,
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
)
refiner.to("cuda")
```
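
If GPU memory is tight, one option (a sketch assuming `accelerate` is installed) is to let diffusers offload submodules to the CPU instead of moving the full pipelines to CUDA:

```python
# Optional, in place of the .to("cuda") calls above: keep each submodule on the
# GPU only while it is needed (slower, but lower peak VRAM usage).
base.enable_model_cpu_offload()
refiner.enable_model_cpu_offload()
```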

Image generation:

```python
prompt = "a vase that resembles a <leaf microstructure>, high quality"

num_samples    = 4
num_rows       = 4
guidance_scale = 15

# Split denoising between the two experts: 80% base, 20% refiner
n_steps = 25
high_noise_frac = 0.8

all_images = []

for _ in range(num_rows):
    # The base expert runs the first 80% of the steps and outputs latents
    image = base(
        prompt=prompt,
        num_inference_steps=n_steps,
        guidance_scale=guidance_scale,
        denoising_end=high_noise_frac,
        num_images_per_prompt=num_samples,
        output_type="latent",
    ).images
    # The refiner expert finishes the remaining 20% and decodes to images
    image = refiner(
        prompt=prompt,
        num_inference_steps=n_steps,
        guidance_scale=guidance_scale,
        denoising_start=high_noise_frac,
        num_images_per_prompt=num_samples,
        image=image,
    ).images

    all_images.extend(image)

grid = image_grid(all_images, num_rows, num_samples,
                  save_individual_files=True)
grid
```
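
To vary how strongly the adapter shapes the output, the LoRA scale can also be set at inference time. The snippet below is a minimal base-only sketch (no refiner), reusing the `prompt` and `guidance_scale` defined above:

```python
# LoRA strength via cross_attention_kwargs: 1.0 = full adapter effect;
# lower values blend back toward the original SDXL base model.
image = base(
    prompt=prompt,
    num_inference_steps=25,
    guidance_scale=guidance_scale,
    cross_attention_kwargs={"scale": 0.8},
).images[0]
image
```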


![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/R7sr9kAwZjRk_80oMY54h.png)