Update README.md

README.md CHANGED

@@ -11,12 +11,12 @@ model-index:
   results: []
 ---
 
-![
+![Freya](https://huggingface.co/Sao10K/14B-Qwen2.5-Freya-x1/resolve/main/sad.png)
+*Me during failed runs*
 
 # 14B-Qwen2.5-Freya-v1
 
-I decided to mess around with training methods, considering the re-emegence of
-
+I decided to mess around with training methods again, considering the re-emergence of methods like multi-step training. Some people began doing it again, and so, why not? Inspired by AshhLimaRP's methodology, but done my way.
 
 Freya-S1
 - LoRA Trained on ~1.1GB of literature and raw text over Qwen 2.5's base model.
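The Freya-S1 stage described above is a LoRA trained over the base model rather than a full fine-tune. As a rough illustration of why that matters at this scale, the sketch below (plain Python; the rank of 64 and the 5120 hidden size are assumptions for illustration — the README does not state the actual LoRA hyperparameters) compares trainable-parameter counts for a full update of one projection matrix versus a rank-r low-rank update:

```python
# LoRA replaces a full d_out x d_in weight update with two low-rank
# factors B (d_out x r) and A (r x d_in), so the trainable parameter
# count per matrix drops from d_out*d_in to r*(d_out + d_in).

def full_update_params(d_out: int, d_in: int) -> int:
    # Parameters needed to fine-tune the whole weight matrix.
    return d_out * d_in

def lora_params(d_out: int, d_in: int, r: int) -> int:
    # Parameters needed for the rank-r LoRA factors B and A.
    return r * (d_out + d_in)

# Hypothetical example: a single square 5120x5120 projection
# (a plausible hidden size for a 14B model) at an assumed rank of 64.
d, r = 5120, 64
print(full_update_params(d, d))  # 26214400
print(lora_params(d, d, r))      # 655360
```

At these assumed dimensions the low-rank update trains roughly 2.5% of the parameters of a full update for that matrix, which is what makes a multi-step LoRA run over a 14B base practical on modest hardware.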