IDK-ab0ut committed on

Commit 79a551a · verified · 1 Parent(s): 31654dc

Update README.md

Files changed (1): README.md +12 -32

README.md CHANGED
@@ -10,10 +10,7 @@ tags:
  This is a Diffusers-compatible version of [Yiffymix v51 by chilon249](https://civitai.com/models/3671?modelVersionId=658237).
  See the original page for more information.
 
- Keep in mind that this is [SDXL-Lightning](https://huggingface.co/ByteDance/SDXL-Lightning) checkpoint model,
- so using fewer steps (around 12 to 25) and low guidance
- scale (around 4 to 6) is recommended for the best result.
- It's also recommended to use clip skip of 2.
 
  This repository uses DPM++ 2M Karras as its sampler (Diffusers only).
 
@@ -50,8 +47,7 @@ Feel free to edit the image's configuration with your desire.
 
  You can see all available schedulers [here](https://huggingface.co/docs/diffusers/v0.11.0/en/api/schedulers/overview).
 
- To use scheduler other than DPM++ 2M Karras for this repository, make sure to import the
- corresponding pipeline for the scheduler you want to use. For example, we want to use Euler. First, import [EulerDiscreteScheduler](https://huggingface.co/docs/diffusers/v0.29.2/en/api/schedulers/euler#diffusers.EulerDiscreteScheduler) from Diffusers by adding this line of code.
  ```py
  from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler
  ```
@@ -84,15 +80,9 @@ pipeline = StableDiffusionXLPipeline.from_pretrained(
  ).to("cuda")
  ```
  ## Variational Autoencoder (VAE) Installation 🖼
- There are two ways to get [Variational Autoencoder (VAE)](https://huggingface.co/learn/computer-vision-course/en/unit5/generative-models/variational_autoencoders) file into the model. The first one
- is to download the file manually and the second one is to remotely download the file using code. In this repository,
- I'll explain the method of using code as the efficient way. First step is to download the VAE file.
- You can download the file manually or remotely, but I recommend you to use the remote one. Usually, VAE
- files are in .safetensors format. There are two websites you can visit to download VAE. Those are HuggingFace
- and [CivitAI](civitai.com).
  ### From HuggingFace 😊
- This method is pretty straightforward. Pick any VAE's repository you like. Then, navigate to "Files" and
- the VAE's file. Make sure to click the file.
 
  Click the "Copy Download Link" for the file, you'll need this.
 
@@ -119,13 +109,11 @@ pipeline = StableDiffusionXLPipeline.from_pretrained(
  model, torch_dtype=torch.float16,
  vae=vae).to("cuda")
  ```
- For manual download, just fill the `link` variable or any string variables you use to
- load the VAE file with path directory of the .safetensors.
 
  ##### <small><b>Troubleshooting</b></small> 🔧
 
- In case if you're experiencing `HTTP404` error because
- the program can't resolve your link, here's a simple fix.
 
  First, download [huggingface_hub](https://huggingface.co/docs/huggingface_hub/en/index) using `pip`.
  ```py
@@ -146,18 +134,12 @@ vae = AutoencoderKL.from_single_file(
  # use 'torch_dtype=torch.float16' for FP16.
  # add 'subfolder="folder_name"' argument if the VAE is in specific folder.
  ```
- You can use [hf_hub_download()](https://huggingface.co/docs/huggingface_hub/en/guides/download) from [huggingface_hub](https://huggingface.co/docs/huggingface_hub/en/index)
- casually without the need to check if previous method returns `HTTP404` error.
  ### From CivitAI 🇨
- It's trickier if the VAE is in [CivitAI](civitai.com), because you can't use
- `from_single_file()` method. It only works for files inside HuggingFace and local files only. You can upload the VAE from there into
- HuggingFace, but you must comply with the model's license before continuing. To solve this issue, you may
- use `wget` or `curl` command to get the file from outside HuggingFace.
-
- Before downloading, to organize the VAE file you want to use and download, change
- the directory to save the downloaded model with `cd`.
- Use `-O` option before specifying the file's link and name. It's the same thing
- for both `wget` and `curl`.
  ```py
  # For 'wget'
  !cd <path>; wget -O [filename.safetensors] <link>
@@ -170,9 +152,7 @@ for both `wget` and `curl`.
  # Windows Shell, you don't need the exclamation mark (!).
  ```
 
- Since the file is now in your local directory, you can
- finally use `from_single_file()` method normally. Make sure to
- input the correct path for your VAE file. Load the VAE file into [AutoencoderKL](https://huggingface.co/docs/diffusers/en/api/models/autoencoderkl).
  ```py
  path = "path to VAE" # Ends with .safetensors file format.
  model = "IDK-ab0ut/Yiffymix_v51"
 
  This is a Diffusers-compatible version of [Yiffymix v51 by chilon249](https://civitai.com/models/3671?modelVersionId=658237).
  See the original page for more information.
 
+ Keep in mind that this is an [SDXL-Lightning](https://huggingface.co/ByteDance/SDXL-Lightning) checkpoint model, so using fewer steps (around 12 to 25) and a low guidance scale (around 4 to 6) is recommended for the best results. It's also recommended to use a clip skip of 2.
 
 
 
 
  This repository uses DPM++ 2M Karras as its sampler (Diffusers only).
 
 
  You can see all available schedulers [here](https://huggingface.co/docs/diffusers/v0.11.0/en/api/schedulers/overview).
 
+ To use a scheduler other than DPM++ 2M Karras with this repository, make sure to import the corresponding scheduler class from Diffusers. For example, suppose we want to use Euler. First, import [EulerDiscreteScheduler](https://huggingface.co/docs/diffusers/v0.29.2/en/api/schedulers/euler#diffusers.EulerDiscreteScheduler) by adding this line of code.
  ```py
  from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler
  ```
 
  ).to("cuda")
  ```
  ## Variational Autoencoder (VAE) Installation 🖼
+ There are two ways to load a [Variational Autoencoder (VAE)](https://huggingface.co/learn/computer-vision-course/en/unit5/generative-models/variational_autoencoders) file into the model: download the file manually, or fetch it remotely with code. In this repository, I'll explain the code-based method, as it's the more efficient one. The first step is to get the VAE file. You can download it manually or remotely, but I recommend the remote route. VAE files are usually in .safetensors format. There are two websites you can visit to download a VAE: HuggingFace and [CivitAI](https://civitai.com).
 
  ### From HuggingFace 😊
+ This method is pretty straightforward. Pick any VAE repository you like, then navigate to "Files" and find the VAE's file. Make sure to click the file.
 
  Click the "Copy Download Link" for the file; you'll need it.
 
 
  model, torch_dtype=torch.float16,
  vae=vae).to("cuda")
  ```
+ For a manual download, just fill the `link` variable (or whichever string variable you use to load the VAE file) with the local path of the .safetensors file.
 
 
  ##### <small><b>Troubleshooting</b></small> 🔧
 
+ In case you're experiencing an `HTTP404` error because the program can't resolve your link, here's a simple fix.
 
  First, download [huggingface_hub](https://huggingface.co/docs/huggingface_hub/en/index) using `pip`.
  ```py
 
  # use 'torch_dtype=torch.float16' for FP16.
  # add 'subfolder="folder_name"' argument if the VAE is in a specific folder.
  ```
+ You can use [hf_hub_download()](https://huggingface.co/docs/huggingface_hub/en/guides/download) from [huggingface_hub](https://huggingface.co/docs/huggingface_hub/en/index) directly, without needing to check whether the previous method returns an `HTTP404` error.
 
  ### From CivitAI 🇨
+ It's trickier if the VAE is on [CivitAI](https://civitai.com), because you can't use the `from_single_file()` method; it works only for files hosted on HuggingFace or stored locally. You could upload the VAE from there to HuggingFace, but you must comply with the model's license before doing so. To work around this, you can use the `wget` or `curl` command to fetch the file from outside HuggingFace.
+
+ Before downloading, change to the directory where you want to save the model with `cd`, to keep your VAE files organized.
+ Use the `-O` option for `wget` (lowercase `-o` for `curl`) before specifying the file's name and link.
  ```py
  # For 'wget'
  !cd <path>; wget -O [filename.safetensors] <link>
 
  # Windows Shell, you don't need the exclamation mark (!).
  ```
 
+ Since the file is now in your local directory, you can finally use the `from_single_file()` method normally. Make sure to input the correct path to your VAE file. Load the VAE file into [AutoencoderKL](https://huggingface.co/docs/diffusers/en/api/models/autoencoderkl).
  ```py
  path = "path to VAE" # Ends with .safetensors file format.
  model = "IDK-ab0ut/Yiffymix_v51"