Commit b11a8af (parent: f4e512d) · Update README.md

README.md CHANGED
@@ -6,25 +6,18 @@ tags:
 ---
 **Double Exposure Diffusion**
 
-This is the Double Exposure Diffusion model, trained on personally generated images off of MidJourney version 4.
+This is version 1 of the Double Exposure Diffusion model, trained on personally generated images off of MidJourney version 4.
 You trigger double exposure style images using token: **_dbl_ex_**.
 
-**
-![
-**Animal Characters rendered with the model:**
-![Animal Samples](https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/modi-samples-02s.jpg)
-**Cars and Landscapes rendered with the model:**
-![Misc. Samples](https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/modi-samples-03s.jpg)
+**Example:**
+![Sample 1](https://huggingface.co/joachimsallstrom/double-exposure-style/resolve/main/dbl_ex_sample_01.PNG)
 
-#### Prompt and settings
-**
-
+#### Prompt and settings:
+**dbl_ex, white background**<br>
+**Negative prompt: contrast**<br>
+_Steps: 30, Sampler: Euler a, CFG scale: 5, Seed: 3908678340, Size: 512x512_
 
-
-**modern disney (baby lion) Negative prompt: person human**
-_Steps: 50, Sampler: Euler a, CFG scale: 7, Seed: 1355059992, Size: 512x512_
-
-This model was trained using the diffusers based dreambooth training by ShivamShrirao using prior-preservation loss and the _train-text-encoder_ flag in 9.000 steps.
+This model was trained using TheLastBen’s fast DreamBooth model @ 1500 steps.
 
 ## License
 
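The updated card's usage instructions (trigger token `dbl_ex`, negative prompt `contrast`, Euler a sampler, 30 steps, CFG scale 5, seed 3908678340, 512x512) could be sketched with the 🤗 diffusers library roughly as below. This is a minimal sketch under assumptions: the repo id `joachimsallstrom/double-exposure-style` is inferred from the sample-image URL in the diff, and `build_generation_kwargs` is a hypothetical helper, not part of the model card.

```python
# Hypothetical sketch of using the settings from the updated model card.
# The repo id below is inferred from the sample-image URL in the diff and
# may not match the actual model repository.

def build_generation_kwargs(subject: str) -> dict:
    """Assemble the generation settings shown in the updated model card."""
    return {
        "prompt": f"dbl_ex, {subject}",  # dbl_ex is the card's trigger token
        "negative_prompt": "contrast",   # from the card's example
        "num_inference_steps": 30,
        "guidance_scale": 5.0,
        "width": 512,
        "height": 512,
    }

if __name__ == "__main__":
    import torch
    from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

    pipe = StableDiffusionPipeline.from_pretrained(
        "joachimsallstrom/double-exposure-style",  # assumed repo id
        torch_dtype=torch.float16,
    ).to("cuda")
    # "Euler a" in the card corresponds to the Euler ancestral scheduler
    pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
    generator = torch.Generator("cuda").manual_seed(3908678340)
    image = pipe(generator=generator,
                 **build_generation_kwargs("white background")).images[0]
    image.save("dbl_ex_sample.png")
```

Reproducing the card's exact sample also depends on matching the original frontend's sampler implementation, so outputs may differ even with the same seed.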