Update README.md
README.md (changed)
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1.
#### How to use

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
from huggingface_hub import model_info

# LoRA weights (~3 MB)
model_path = "vwu142/pokemon-lora"

# Look up the base model recorded in the model card
info = model_info(model_path)
model_base = info.cardData["base_model"]
print(model_base)

# Load the base diffusion model and attach the LoRA weights
pipe = StableDiffusionPipeline.from_pretrained(model_base, torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.unet.load_attn_procs(model_path)
pipe.to("cuda")
```
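Once loaded, the pipeline can be called like any other Stable Diffusion pipeline. A minimal generation sketch (the step count, guidance scale, and filename below are illustrative; "Ludicolo" is the validation prompt used during training):

```python
# Generate an image with the LoRA-adapted pipeline
image = pipe("Ludicolo", num_inference_steps=25, guidance_scale=7.5).images[0]
image.save("ludicolo.png")
```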
## Training details

The weights were trained on the free GPU provided by Google Colab.

The training data comes from this dataset:
https://huggingface.co/datasets/vwu142/Pokemon-Card-Plus-Pokemon-Actual-Image-And-Captions-13000

It contains images of Pokémon cards and of the Pokémon themselves, each paired with a caption describing the image.
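To inspect the data, a short sketch using the `datasets` library (the `"train"` split name is an assumption; only the `caption` column is confirmed by the training command below):

```python
from datasets import load_dataset

# Load the Pokemon card/image caption dataset from the Hub
ds = load_dataset("vwu142/Pokemon-Card-Plus-Pokemon-Actual-Image-And-Captions-13000", split="train")

print(ds)                # number of rows and column names
print(ds[0]["caption"])  # the column passed as --caption_column in the training script
```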
These are the parameters and the script used to train the weights:

```bash
!accelerate launch --mixed_precision="fp16" diffusers/examples/text_to_image/train_text_to_image_lora.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --mixed_precision="fp16" \
  --dataset_name=$DATASET_NAME --caption_column="caption" \
  --dataloader_num_workers=8 \
  --resolution=512 --center_crop --random_flip \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --max_train_steps=1500 \
  --learning_rate=1e-04 \
  --max_grad_norm=1 \
  --lr_scheduler="cosine" --lr_warmup_steps=0 \
  --output_dir=${OUTPUT_DIR} \
  --push_to_hub \
  --hub_model_id=${HUB_MODEL_ID} \
  --report_to=wandb \
  --checkpointing_steps=500 \
  --validation_prompt="Ludicolo" \
  --seed=1337
```
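The command references `$MODEL_NAME`, `$DATASET_NAME`, `${OUTPUT_DIR}`, and `${HUB_MODEL_ID}`, which must be defined before it runs; in a Colab notebook the `!` shell escape expands these from Python variables set in earlier cells. A sketch of those definitions: the base model and dataset match this card, while `OUTPUT_DIR` and `HUB_MODEL_ID` are assumptions.

```python
# Variables substituted into the accelerate command above.
# MODEL_NAME and DATASET_NAME match this model card; the other two are illustrative.
MODEL_NAME = "stabilityai/stable-diffusion-2-1"
DATASET_NAME = "vwu142/Pokemon-Card-Plus-Pokemon-Actual-Image-And-Captions-13000"
OUTPUT_DIR = "pokemon-lora"
HUB_MODEL_ID = "vwu142/pokemon-lora"
```

With `--train_batch_size=1` and `--gradient_accumulation_steps=4`, the effective batch size is 4.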