xxx777xxxASD committed on
Commit 4379619 · verified · 1 Parent(s): 6651b50

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -10,7 +10,7 @@ tags:
 
 Test merge: an attempt at a model with 128k context that is good at RP, ERP, and general tasks. Every model here has [Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context](https://huggingface.co/Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context) in the merge instead of the regular MistralYarn 128k, because I believe Epiculous merged it with Mistral Instruct v0.2 so that the first 32k of context is as good as possible before YaRN takes over from 32k to 128k. If not, that's sad, or I got something wrong.
 
- [EXL2 4.0](https://huggingface.co/xxx777xxxASD/NeuralKunoichi-EroSumika-4x7B-128k-exl2-bpw-4.0)
+ [Exl2, 4.0 bpw](https://huggingface.co/xxx777xxxASD/NeuralKunoichi-EroSumika-4x7B-128k-exl2-bpw-4.0)
 
 [GGUF](https://huggingface.co/xxx777xxxASD/NeuralKunoichi-EroSumika-4x7B-128k-GGUF)
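For reference, a minimal sketch of running the GGUF quant linked above with llama-cpp-python. The GGUF filename and the prompt format are assumptions, not taken from the repo; check the GGUF repository's file list before downloading.

```python
# Minimal sketch: download and run the linked GGUF quant with llama-cpp-python.
# The filename below is a hypothetical example; use an actual file from the repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="xxx777xxxASD/NeuralKunoichi-EroSumika-4x7B-128k-GGUF",
    filename="NeuralKunoichi-EroSumika-4x7B-128k.Q4_K_M.gguf",  # assumed filename
)

llm = Llama(
    model_path=model_path,
    n_ctx=32768,      # raise toward 128k only if you have RAM/VRAM for the KV cache
    n_gpu_layers=-1,  # offload all layers if llama.cpp was built with GPU support
)

out = llm("Write a short in-character greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```

Keeping `n_ctx` at 32k or below stays within the range the README describes as the Mistral Instruct v0.2 sweet spot; larger values rely on the YaRN-extended context and cost proportionally more memory.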