---
base_model: stabilityai/stable-diffusion-2-1
library_name: diffusers
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
inference: true
datasets:
- vwu142/Pokemon-Card-Plus-Pokemon-Actual-Image-And-Captions-13000
---

# LoRA text2image fine-tuning - vwu142/pokemon-lora
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1, fine-tuned on the vwu142/Pokemon-Card-Plus-Pokemon-Actual-Image-And-Captions-13000 dataset. Some example images are shown below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)



## Intended uses & limitations

#### How to use

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
from huggingface_hub import model_info

# LoRA weights ~3 MB
model_path = "vwu142/pokemon-lora"

# Look up the base model recorded in this card's metadata
info = model_info(model_path)
model_base = info.cardData["base_model"]
print(model_base)  # stabilityai/stable-diffusion-2-1

# Load the base model, swap in the multistep DPM-Solver scheduler,
# and attach the LoRA attention weights
pipe = StableDiffusionPipeline.from_pretrained(model_base, torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.unet.load_attn_procs(model_path)
pipe.to("cuda")
```
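
Once the pipeline is set up, generation works like any other Stable Diffusion pipeline. A minimal sketch, assuming the pipeline from the snippet above (the prompt, step count, and LoRA scale are illustrative):

```python
# Generate an image with the LoRA-adapted pipeline.
# `scale` blends between the base model (0.0) and the full LoRA weights (1.0).
image = pipe(
    "Ludicolo",
    num_inference_steps=25,
    cross_attention_kwargs={"scale": 0.8},
).images[0]
image.save("pokemon_sample.png")
```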

## Training details

The weights were trained on the free GPU provided in Google Colab.

The training data comes from this dataset:
https://huggingface.co/datasets/vwu142/Pokemon-Card-Plus-Pokemon-Actual-Image-And-Captions-13000

It contains images of Pokémon cards and of the Pokémon themselves, paired with captions describing each image.
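
If you want to inspect the data, it can be loaded with the `datasets` library. A minimal sketch (the `caption` column name comes from the `--caption_column` flag used in training; no other column names are assumed):

```python
from datasets import load_dataset

# Load the Pokemon card/image caption dataset used for fine-tuning.
ds = load_dataset(
    "vwu142/Pokemon-Card-Plus-Pokemon-Actual-Image-And-Captions-13000",
    split="train",
)
print(ds)                # features and number of rows
print(ds[0]["caption"])  # "caption" is the column passed via --caption_column
```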


These are the parameters and the command used to train the weights:

```bash
!accelerate launch --mixed_precision="fp16" diffusers/examples/text_to_image/train_text_to_image_lora.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --mixed_precision="fp16" \
  --dataset_name=$DATASET_NAME --caption_column="caption" \
  --dataloader_num_workers=8 \
  --resolution=512 --center_crop --random_flip \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --max_train_steps=1500 \
  --learning_rate=1e-04 \
  --max_grad_norm=1 \
  --lr_scheduler="cosine" --lr_warmup_steps=0 \
  --output_dir=${OUTPUT_DIR} \
  --push_to_hub \
  --hub_model_id=${HUB_MODEL_ID} \
  --report_to=wandb \
  --checkpointing_steps=500 \
  --validation_prompt="Ludicolo" \
  --seed=1337
```
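
The command above relies on the placeholder environment variables used throughout the diffusers LoRA example. A minimal sketch of one way to set them in the Colab notebook before launching training (the base model and dataset names come from this card; the output directory and Hub repo id are assumptions):

```python
import os

# Values picked up by the `!accelerate launch` command above,
# which inherits the notebook's environment variables.
os.environ["MODEL_NAME"] = "stabilityai/stable-diffusion-2-1"
os.environ["DATASET_NAME"] = "vwu142/Pokemon-Card-Plus-Pokemon-Actual-Image-And-Captions-13000"
os.environ["OUTPUT_DIR"] = "/content/pokemon-lora"   # illustrative path
os.environ["HUB_MODEL_ID"] = "pokemon-lora"          # assumed repo name
```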