
Yulia Yakovleva

robolamp

AI & ML interests

None yet

Recent Activity

reacted to MonsterMMORPG's post with πŸ‘€ 28 days ago
How to Extract LoRA from FLUX Fine Tuning / DreamBooth Training: Full Tutorial and Comparison Between Fine Tuning vs Extraction vs LoRA Training
liked a model 28 days ago
amd/SD2.1-Nitro
liked a model about 2 months ago
black-forest-labs/FLUX.1-dev

Organizations

None yet

robolamp's activity

reacted to MonsterMMORPG's post with πŸ‘€ 28 days ago
How to Extract LoRA from FLUX Fine Tuning / DreamBooth Training: Full Tutorial and Comparison Between Fine Tuning vs Extraction vs LoRA Training

The full article is available as a public post: https://www.patreon.com/posts/112335162

This post is short, so check out the full article at the public post linked above.

Conclusions
With the same training dataset (15 images), the same number of steps (all compared trainings ran 150 epochs, i.e., 15 images × 150 epochs = 2250 steps), and almost the same training duration, Fine Tuning / DreamBooth training of FLUX yields the best results

So yes, Fine Tuning is much better than LoRA training itself

Amazing resemblance and quality, with the least amount of overfitting

Moreover, extracting a LoRA from the Fine Tuned full checkpoint yields far better results than LoRA training itself (see the extraction sketch below)

Extracting a LoRA from fully trained checkpoints also yielded far better results with SD 1.5 and SDXL
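
For reference, extraction of this kind is typically done by taking the per-layer weight difference between the fine-tuned and base checkpoints and approximating it with a truncated SVD. Below is a minimal sketch, assuming both checkpoints are loaded as matching state dicts of 2-D weight tensors; the key naming and layer filtering are illustrative, and Kohya's actual FLUX extraction script handles the real layer layout:

```python
import torch

def extract_lora(base_sd, tuned_sd, rank=128):
    """Approximate (tuned - base) per layer with a rank-limited factorization."""
    lora_sd = {}
    for name, w_base in base_sd.items():
        w_tuned = tuned_sd[name]
        if w_base.dim() != 2:  # skip biases, norms, etc. in this sketch
            continue
        delta = (w_tuned - w_base).float()
        u, s, vh = torch.linalg.svd(delta, full_matrices=False)
        r = min(rank, s.numel())
        # Split each singular value across the two factors: delta ~ up @ down
        up = u[:, :r] * s[:r].sqrt()                # (out_features, r)
        down = s[:r].sqrt().unsqueeze(1) * vh[:r]   # (r, in_features)
        lora_sd[f"{name}.lora_up.weight"] = up.to(torch.float16)
        lora_sd[f"{name}.lora_down.weight"] = down.to(torch.float16)
    return lora_sd
```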

A comparison of these three approaches is shown in Image 5 (check the very top of the images)

A 640 Network Dimension (Rank) FP16 LoRA takes 6.1 GB of disk space
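
That size follows from the rank: a LoRA stores two factors per linear layer, rank × (in_features + out_features) parameters, so file size scales roughly linearly with rank, and a rank-128 extraction should be about a fifth the size. A back-of-the-envelope sketch, using hypothetical layer dimensions rather than FLUX's actual architecture:

```python
# LoRA adds two factors per linear layer: rank * (in + out) parameters,
# 2 bytes each in FP16, so size scales linearly with rank.
def lora_gib(layers, rank, bytes_per_param=2):
    params = sum(rank * (fin + fout) for fin, fout in layers)
    return params * bytes_per_param / 2**30

layers = [(3072, 3072)] * 400  # made-up stand-in dims, NOT FLUX's real layers
print(lora_gib(layers, rank=640))  # ~2.9 GiB with these made-up dims
print(lora_gib(layers, rank=128))  # exactly 640/128 = 5x smaller
```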

You can also try a 128 Network Dimension (Rank) FP16 extraction and different LoRA strengths during inference to bring results closer to the Fine Tuned model
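
A hedged sketch of sweeping LoRA strength at inference with diffusers, assuming the extracted LoRA has been converted to a format diffusers can load; the file path and prompt are placeholders:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# Placeholder path; the LoRA must be in a diffusers-loadable format
pipe.load_lora_weights("path/to/extracted_lora.safetensors", adapter_name="subject")

for scale in (0.6, 0.8, 1.0):  # sweep LoRA strength at inference
    pipe.set_adapters(["subject"], adapter_weights=[scale])
    image = pipe("photo of the trained subject",  # placeholder prompt
                 num_inference_steps=28).images[0]
    image.save(f"strength_{scale:.1f}.png")
```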

Moreover, you can try the Resize LoRA feature of the Kohya GUI; hopefully that will be the subject of another research article later
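
Conceptually, resizing re-factorizes each up/down pair at a lower rank via truncated SVD; Kohya's actual resize script adds options such as dynamic rank selection. A minimal sketch of the core idea:

```python
import torch

def resize_pair(up, down, new_rank):
    """Re-factorize one LoRA layer (up @ down) at a lower rank."""
    delta = up.float() @ down.float()  # reconstruct the full weight delta
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    r = min(new_rank, s.numel())
    new_up = (u[:, :r] * s[:r].sqrt()).to(up.dtype)
    new_down = (s[:r].sqrt().unsqueeze(1) * vh[:r]).to(down.dtype)
    return new_up, new_down
```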

Image Raw Links
Image 1 : https://huggingface.co/MonsterMMORPG/FLUX-Fine-Tuning-Grid-Tests/resolve/main/Image_1.png

Image 2 : https://huggingface.co/MonsterMMORPG/FLUX-Fine-Tuning-Grid-Tests/resolve/main/Image_2.jfif

Image 3 : https://huggingface.co/MonsterMMORPG/FLUX-Fine-Tuning-Grid-Tests/resolve/main/Image_3.jfif

Image 4 : https://huggingface.co/MonsterMMORPG/FLUX-Fine-Tuning-Grid-Tests/resolve/main/Image_4.jfif

Image 5 : https://huggingface.co/MonsterMMORPG/FLUX-Fine-Tuning-Grid-Tests/resolve/main/Image_5.jpg