ChuckMcSneed committed · Commit dca7c37 · 1 Parent(s): 999b7da
Update README.md
README.md CHANGED
@@ -30,24 +30,26 @@ PS=creative writing
 Here they are: the results of each test. You can see the raw data in [LLM-test.csv](LLM-test.csv)
 
 What they show is quite interesting:
-- Goliath
-- Goliath, Xwin and Mixtral are the best at creative writing
+- Goliath and Spicyboros are the best at following commands, followed by Qwen and Nous-Hermes
 - Qwen is terrible at creative writing, but good at following commands; Mixtral is the opposite
-- Xwin, Goliath and Mixtral are the best at stylized writing
-- Goliath, Euryale, Xwin and Mixtral are the only ones who were capable to write coherent poems most of the time
+- Xwin, Goliath, Spicyboros and Mixtral are the best at creative tasks
 - Una-xaberius shows that overtraining on benchmarks leads to a loss of creativity, and the model does not become smarter
 - Solar-instruct, despite its small size, can still write poems, but is incapable of writing in style
 - ChatGPT can't pass the B-test due to its filter, so the C and P tests were not performed. It is incredibly good at stylized writing though, outperforming ALL tested local models. It can't pass the D-test due to overfitting.
+- Spicyboros likely inherited its ability for stylized writing from GPT
+- Spicyboros is incredibly well finetuned for following instructions, and can even do stylized writing, outperforming Goliath on this benchmark. Not sure how it performs in practice
 - Cybertron seems to perform at approximately the same level as Solar-instruct; it was also surprisingly okay at writing poems
 - Neither Cybertron nor Solar-instruct outperforms 70B models as they claim. Both are unable to follow advanced instructions (BCD tests).
+- Chronos is the first 70B model that can't pass any of the BCD tests, i.e. it is incapable of following complex instructions
+- SynthIA is a disappointment
 
 # More tests?
 Feel free to suggest more models for testing by opening a new discussion. Mention the model name, size, and why you want to test it.
 
 # Updates
-2023-12-
+### 2023-12-20
+- Added Synthia, Spicyboros and Chronos
 
-Added cybertron v3 per request of @fblgit.
+### 2023-12-19
+- Added solar-instruct (suspiciously high benchmarks), una-xaberius (known cheater) and ChatGPT. Some tests were not performed with ChatGPT because Sam will ban me for them.
+- Added cybertron v3 per request of @fblgit.
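The README points readers at the raw scores in [LLM-test.csv](LLM-test.csv). As a rough illustration only, here is a minimal Python sketch for loading that file and ranking models by an aggregate score; the column names `model` and `total` are assumptions, since the file's actual headers are not shown in this diff.

```python
import csv

def score(row):
    # Assumed aggregate-score column named "total"; non-numeric or missing
    # values sort to the bottom.
    try:
        return float(row.get("total", ""))
    except ValueError:
        return float("-inf")

# Load the raw results and print models from highest to lowest assumed score.
with open("LLM-test.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for row in sorted(rows, key=score, reverse=True):
    print(row.get("model", "?"), row.get("total", "?"))
```

Adjust the `model` and `total` keys to whatever headers the CSV actually uses before relying on the ranking.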