Jyrrata committed
Commit 51940bf · verified · 1 Parent(s): 29b0eb3

Update README.md

Files changed (1)
  1. README.md +1 -0
README.md CHANGED
@@ -21,6 +21,7 @@ The usual way of adding characters to an already trained model is LoRAs or simil
 2. Full finetunes usually result in a model that is very capable of abstraction, meaning that even if a specific combination of concepts isn't present in the training data, a well-trained finetune will be able to combine these concepts in an almost logical way.
 3. As a specific example of 2.: while style and concept LoRAs usually work quite well when used together with other LoRAs of their type, character LoRAs tend to be much less capable of being combined with other character LoRAs to generate images of multiple new characters. There are ways to make this work, and some LoRAs are more capable than others, but in my experience this still limits the results. One workaround is to train a LoRA on multiple characters as well as on images of those characters together, which works quite well in my experience; however, finding artwork of specific characters together is not always possible, and as more and more characters are added, the resulting LoRA must also grow in size to learn them all. This makes LoRAs less and less effective the more characters one wants to add at once.
 4. It is possible to extract a LoRA from a model, meaning the things the finetune learned can be applied to models that share a similar base. This means that even if the resulting finetune doesn't work for every application, the extract can still profit from most of the advantages of a LoRA.
+
 These factors make a full-scale finetune the best option for adding large numbers of characters to a model.
 ## The How
 Thanks to optimizations in model training and technologies like Gradient Accumulation, it is possible to effectively finetune models even on normal consumer hardware like a single RTX 3090, as long as one has enough time to complete the training. Since there is no need to train an entire base model from scratch, this becomes even more feasible.
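
The final paragraph of the diff leans on gradient accumulation to make single-GPU finetuning practical. Below is a minimal, generic PyTorch sketch of that technique, not code from this repository: `model`, `dataloader`, `loss_fn`, and `accumulation_steps` are assumed placeholder names. The point is only that gradients from several small batches are summed before a single optimizer step, emulating a larger effective batch size on a 24 GB card like an RTX 3090.

```python
def train_one_epoch(model, dataloader, loss_fn, optimizer, accumulation_steps=8):
    """Generic training loop with gradient accumulation (illustrative only)."""
    model.train()
    optimizer.zero_grad()
    for step, (inputs, targets) in enumerate(dataloader):
        loss = loss_fn(model(inputs), targets)
        # Scale the loss so the summed gradients match one large batch of
        # size (micro_batch_size * accumulation_steps).
        (loss / accumulation_steps).backward()
        if (step + 1) % accumulation_steps == 0:
            optimizer.step()       # one weight update per accumulation window
            optimizer.zero_grad()
```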
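Point 4 above mentions extracting a LoRA from a finished finetune. A common way to do this is to take the difference between each finetuned weight matrix and its counterpart in the base model and keep a low-rank SVD approximation of that difference; the sketch below shows the idea for a single linear layer. `extract_lora_pair` and its parameters are hypothetical names for illustration, not an existing API, and real extraction tools additionally handle layer naming and file formats.

```python
import torch

def extract_lora_pair(w_base: torch.Tensor, w_tuned: torch.Tensor, rank: int = 32):
    """Return (down, up) matrices such that up @ down approximates w_tuned - w_base."""
    delta = (w_tuned - w_base).float()
    # Low-rank approximation of the learned difference via SVD.
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    up = u[:, :rank] * s[:rank]   # (out_features, rank), singular values folded in
    down = vh[:rank, :]           # (rank, in_features)
    return down, up
```

A full extractor would repeat this over every targeted weight in the model and save the pairs in a LoRA-compatible file, but the low-rank delta is the reason an extract keeps most of a LoRA's advantages.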