ChuckMcSneed
committed
Update README.md
README.md CHANGED
@@ -38,6 +38,7 @@ What they show is quite interesting:
- Cheater meme model (una-cybertron) was somewhat creative, but braindead
- Base model self-merge (Dicephal-123B) increased creativity, but didn't add extra prompt compliance (a merge sketch follows the diff)
- All my attempts to extend the context of XWin and Llama using [Yukang's](https://huggingface.co/Yukang) LoRAs led to a drastic decrease in the creativity and coherence of the models :( (a LoRA sketch follows the diff)
+ - Miqu is currently the best 32k model according to this benchmark

# More tests?
Feel free to suggest more models for testing by opening a new discussion. Mention the model name, size, and why you want to test it.
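
For reference, a base-model self-merge like Dicephal-123B is typically produced with a passthrough merge that stacks overlapping layer slices of the same model, e.g. via mergekit. Below is a minimal sketch under that assumption; the base model id and layer ranges are placeholders, not the actual Dicephal-123B recipe.

```python
# Hedged sketch of a passthrough self-merge ("frankenmerge") via mergekit.
# The base model id and layer ranges are placeholders, not the actual
# Dicephal-123B recipe.
import subprocess

import yaml

config = {
    "slices": [
        # Two overlapping slices of the SAME model, stacked back to back
        {"sources": [{"model": "some-org/some-70b-base", "layer_range": [0, 70]}]},
        {"sources": [{"model": "some-org/some-70b-base", "layer_range": [10, 80]}]},
    ],
    "merge_method": "passthrough",  # copy layers through without averaging
    "dtype": "float16",
}

with open("self-merge.yaml", "w") as f:
    yaml.safe_dump(config, f)

# mergekit-yaml is mergekit's CLI entry point
subprocess.run(["mergekit-yaml", "self-merge.yaml", "./self-merged-model"], check=True)
```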
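
Similarly, here is a hedged sketch of how one might attach a context-extension LoRA such as those from [Yukang's](https://huggingface.co/Yukang) LongLoRA work, assuming the adapter is published as a standard PEFT adapter. The repo ids are illustrative, and LongLoRA checkpoints may additionally need their trained embedding/norm weights and RoPE-scaling config, which this sketch omits.

```python
# Hedged sketch: attaching a long-context LoRA to a base model with PEFT.
# Repo ids are illustrative. LongLoRA adapters may also require trained
# embedding/norm weights and RoPE scaling, which are not handled here.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-70b-hf"        # placeholder base model
lora_id = "Yukang/Llama-2-70b-longlora-32k"  # assumed adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, lora_id)  # attach the adapter
model = model.merge_and_unload()  # fold LoRA weights into the base for inference
```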