Kquant03 committed
Commit 3fc3b5c · 1 Parent(s): 64264a5

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -2,8 +2,8 @@
 
 # Try to get it to answer your questions, if you even can...
 
-A frankenMoE of [TinyLlama-1.1B-1T-OpenOrca](https://huggingface.co/jeff31415/TinyLlama-1.1B-1T-OpenOrca)
-[TinyLlama-1.1B-intermediate-step-1195k-token-2.5T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T)
-and [tiny-llama-1.1b-chat-medical](https://huggingface.co/SumayyaAli/tiny-llama-1.1b-chat-medical)
+A frankenMoE of [TinyLlama-1.1B-1T-OpenOrca](https://huggingface.co/jeff31415/TinyLlama-1.1B-1T-OpenOrca),
+[TinyLlama-1.1B-intermediate-step-1195k-token-2.5T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T),
+and [tiny-llama-1.1b-chat-medical](https://huggingface.co/SumayyaAli/tiny-llama-1.1b-chat-medical),
 
 # Most 1.1B models are decoherent and can't even answer simple questions. I found the models that don't fail in this regard, then mashed 32 copies of those 3 models together into a 32x MoE
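A frankenMoE like the one this README describes is typically assembled with a tool such as mergekit's `mergekit-moe`, which clones the listed source models into experts behind a router. A minimal sketch, assuming mergekit is the build tool (the commit does not say which was used); the `positive_prompts` strings and `gate_mode` choice here are hypothetical, and a 32-expert build would simply repeat these `experts` entries until 32 are listed:

```yaml
# Hypothetical mergekit-moe config sketch (not from the commit itself).
# The base model seeds the shared weights; each experts entry becomes
# one expert, routed by its positive_prompts hidden-state embeddings.
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T
gate_mode: hidden          # route via hidden-state similarity to the prompts
dtype: bfloat16
experts:
  - source_model: jeff31415/TinyLlama-1.1B-1T-OpenOrca
    positive_prompts:
      - "answer this question"      # hypothetical routing prompt
  - source_model: SumayyaAli/tiny-llama-1.1b-chat-medical
    positive_prompts:
      - "explain this medical term" # hypothetical routing prompt
  - source_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T
    positive_prompts:
      - "continue the text"         # hypothetical routing prompt
  # ...repeat copies of the three entries above to reach 32 experts
```

Repeating copies of the same few source models inflates the expert count without adding new knowledge, which is why frankenMoEs built this way are judged mainly on coherence, as the README does here.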