Update README.md
README.md
@@ -56,7 +56,7 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 ```
 
 # Notes:
 
-- For small datasets with narrow content, where the model already performs well in our domain and we don't want it to forget that knowledge => just need to focus on
+- For small datasets with narrow content, where the model already performs well in our domain and we don't want it to forget that knowledge => just need to focus on o.
 - Fine-tuned LoRA with rank = 1 and alpha = 64, epoch = 1, linear (optim)
 - DoRA
 
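For reference, a minimal sketch of what the LoRA / DoRA setup described in these notes could look like with the Hugging Face `peft` and `transformers` libraries. The base model name, `target_modules`, learning rate, and batch size are placeholder assumptions, not taken from this repo; only rank = 1, alpha = 64, 1 epoch, the linear schedule, and DoRA come from the notes above.

```python
# Hedged sketch: model name, target_modules, learning rate, and batch size are
# assumptions; r=1, lora_alpha=64, num_train_epochs=1, the linear scheduler,
# and use_dora=True mirror the notes above.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base_model = AutoModelForCausalLM.from_pretrained("your-base-model")  # placeholder

lora_config = LoraConfig(
    r=1,                                  # rank = 1
    lora_alpha=64,                        # alpha = 64
    target_modules=["q_proj", "v_proj"],  # assumed targets; adjust per architecture
    use_dora=True,                        # DoRA: weight-decomposed low-rank adaptation
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()        # only the low-rank adapters are trainable

training_args = TrainingArguments(
    output_dir="lora-out",                # placeholder
    num_train_epochs=1,                   # epoch = 1
    lr_scheduler_type="linear",           # linear schedule
    learning_rate=2e-4,                   # assumed value
    per_device_train_batch_size=4,        # assumed value
)
# training_args would then be passed to a transformers Trainer (or trl SFTTrainer)
# together with the wrapped model and a dataset (not shown here).
```

With `use_dora=True`, `peft` decomposes the adapted weight into a magnitude and a direction component, which is the DoRA variant mentioned in the last bullet; setting it to `False` gives plain LoRA with the same rank and alpha.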