---
license: apache-2.0
language:
- en
tags:
- merge
---
![image/png](https://i.ibb.co/MRXkh6p/icon2.png)
Test merge. The goal is a model that's good at RP, ERP, and general tasks with 128k context. Every model here has [Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context](https://huggingface.co/Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context) in the merge instead of the regular Mistral YaRN 128k. The reason is that I believe Epiculous merged it with Mistral Instruct v0.2, which should make the first 32k of context as solid as possible before YaRN scaling takes over from 32k to 128k. If that's not the case, that's a shame, or I may have gotten something wrong.
- [Exl2, 4.0 bpw](https://huggingface.co/xxx777xxxASD/NeuralKunoichi-EroSumika-4x7B-128k-exl2-bpw-4.0)
- [GGUF](https://huggingface.co/xxx777xxxASD/NeuralKunoichi-EroSumika-4x7B-128k-GGUF)
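If you grab one of the GGUF quants, here is a minimal sketch of running it locally with the llama-cpp-python bindings. The file name, context size, and prompt are placeholders rather than anything stated on this card; use whichever quant you actually downloaded.

```python
# Minimal sketch, assuming a downloaded GGUF quant of this model (file name is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="./NeuralKunoichi-EroSumika-4x7B-128k.Q4_K_M.gguf",  # placeholder quant file
    n_ctx=32768,      # raise toward 128k only if you have the RAM/VRAM for the KV cache
    n_gpu_layers=-1,  # offload all layers to GPU when possible
)

# Mistral-Instruct style prompt, since the merge descends from Mistral-7B-Instruct-v0.2
out = llm("[INST] Introduce yourself in two sentences. [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```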
Here is the "family tree" of this model. I'm not writing the full model names because they're too long.
### NeuralKunoichi-EroSumika 4x7B 128k
```
* NeuralKunoichi-EroSumika 4x7B
    *(1) Kunocchini-7b-128k
    |
    *(2) Mistral-Instruct-v0.2-128k
        * Mistral-7B-Instruct-v0.2
        |
        * Fett-128k
    |
    *(3) Erosumika-128k
        * Erosumika 7B
        |
        * Fett-128k
    |
    *(4) Mistral-NeuralHuman-128k
        * Fett-128k
        |
        * Mistral-NeuralHuman
            * Mistral_MoreHuman
            |
            * Mistral-Neural-Story
```
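The card doesn't say which tooling produced the 4x7B mixture, so purely as an illustration, here is what a mergekit-moe style configuration built from the four 128k branches above might look like. The repo paths, gate prompts, and choice of base model are all assumptions, not taken from this card.

```python
# Hypothetical sketch only: assumes mergekit's MoE mode (mergekit-moe) was used,
# which this card does not confirm. Expert paths and gate prompts are placeholders.
import yaml

moe_config = {
    "base_model": "Mistral-Instruct-v0.2-128k",        # assumed base; placeholder path, branch (2) above
    "gate_mode": "hidden",                              # route by hidden-state similarity to the prompts
    "dtype": "bfloat16",
    "experts": [
        {"source_model": "Kunocchini-7b-128k",          # (1) in the family tree; placeholder path
         "positive_prompts": ["roleplay", "chat"]},
        {"source_model": "Mistral-Instruct-v0.2-128k",  # (2); placeholder path
         "positive_prompts": ["instructions", "general assistance"]},
        {"source_model": "Erosumika-128k",              # (3); placeholder path
         "positive_prompts": ["erp"]},
        {"source_model": "Mistral-NeuralHuman-128k",    # (4); placeholder path
         "positive_prompts": ["storywriting"]},
    ],
}

# Write the config; it could then be fed to something like `mergekit-moe moe.yml ./output-dir`.
with open("moe.yml", "w") as f:
    yaml.safe_dump(moe_config, f, sort_keys=False)
```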
## Models used
- [localfultonextractor/Erosumika-7B](https://huggingface.co/localfultonextractor/Erosumika-7B)
- [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
- [Test157t/Kunocchini-7b-128k-test](https://huggingface.co/Test157t/Kunocchini-7b-128k-test)
- [NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story](https://huggingface.co/NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story7b)
- [valine/MoreHuman](https://huggingface.co/valine/MoreHuman)
- [Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context](https://huggingface.co/Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context)
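
For the unquantized weights, here is a minimal transformers loading sketch. It assumes the main repo id is `xxx777xxxASD/NeuralKunoichi-EroSumika-4x7B-128k` (this page doesn't state it explicitly), and the prompt and sampling settings are just examples.

```python
# Minimal sketch, not an official usage snippet from this card.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "xxx777xxxASD/NeuralKunoichi-EroSumika-4x7B-128k"  # assumed repo id for the full-precision merge

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,   # switch to torch.float16 if your GPU lacks bf16 support
    device_map="auto",
)

# Mistral-Instruct style prompt, since the merge descends from Mistral-7B-Instruct-v0.2
prompt = "[INST] Write a short scene set in a rainy city. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```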