ChuckMcSneed committed b1a15b8 · verified · Parent: 127d266

Update README.md

Files changed (1):
  1. README.md +1 -0
README.md CHANGED
@@ -39,6 +39,7 @@ What they show is quite interesting:
  - Base model self-merge (Dicephal-123B) increased creativity, but didn't add extra prompt compliance
  - All my attempts to extend the context of XWin and Llama by using [Yukang's](https://huggingface.co/Yukang) LoRAs have led to a drastic decrease in the creativity and coherence of the models :(
  - Miqu is currently the best 32k model according to this benchmark
+ - Miqu-120b is the second model, after ChatGPT, to pass the S-test with a 100% score!
 
  # More tests?
  Feel free to suggest more models for testing by opening a new discussion. Mention the model name, its size, and why you want to test it.