---
license: llama2
tags:
- merge
- mergekit
- nsfw
- not-for-all-audiences
language:
- en
- ru
---
This is like Gembo v1, but with 6-7% more human data. It performs a bit worse on the benches (who cares), but should be able to write in more diverse styles (see waxwing-styles.txt). Mainly made for RP, but should be okay as an assistant. Turned out quite good, considering the amount of LoRAs I merged into it.
## Observations
- GPTisms and repetition: raise temperature and repetition penalty, and add common GPTisms as stop sequences (see the sketch after this list)
- A bit different from the usual stuff; I'd say it has so much slop in it that it unslops itself
- Lightly censored
- Fairly neutral; can be violent if you really push for it, though Goliath is a bit better at it
- Has a bit of optimism baked in, but it's not very severe
- Doesn't know when to stop; can be quite verbose or just stop almost immediately (maybe it wants LimaRP settings, idk)
- Sometimes can't handle the ' character
- Second model that tried to be funny at me unprompted (the first one was Goliath)
- Moderately intelligent
- Quite creative
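For the stop-sequence trick mentioned above, here is a minimal sketch using the Hugging Face `transformers` generate API. The `stop_strings` argument exists in recent `transformers` releases; the repo ID and the example GPTism strings are my own assumptions, not part of this card.

```python
# Sketch: cutting generation off at common GPTisms via stop strings.
# Requires a recent transformers release (stop_strings support).
# The repo id and the stop strings below are placeholder assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChuckMcSneed/Gembo-v1.1-70b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

gptisms = ["shivers down", "ministrations", "I cannot"]  # pick your own offenders

inputs = tokenizer(
    "### Instruction:\nWrite a scene.\n\n### Response:\n",
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    stop_strings=gptisms,  # generation halts when any of these is produced
    tokenizer=tokenizer,   # generate() needs the tokenizer to match stop strings
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```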
## Naming
The internal name of this model was `euryale-guano-saiga-med-janboros-kim-wing-lima-wiz-tony-d30-s40`, but I decided to keep it short, and since it was iteration G in my files, I called it "Gembo".
## Prompt format
Alpaca. You can also try some other formats; I'm pretty sure it picked up a lot of them from all those merges.
```
### Instruction:
{instruction}

### Response:
```
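If you want to build the prompt programmatically, here is a minimal sketch (the helper name is my own invention, not part of the card):

```python
# Minimal Alpaca-style prompt builder; the function name is made up for this sketch.
def build_alpaca_prompt(instruction: str) -> str:
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

print(build_alpaca_prompt("Write a short scene in a tavern."))
```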
## Settings
As I already mentioned, high temperature and repetition penalty work great. For RP, try something like this:
- temperature=5
- MinP=0.10
- rep.pen.=1.15
Adjust to match your needs.
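As a rough sketch, those settings map onto `transformers` sampling parameters like this, reusing `model`, `tokenizer`, and `inputs` from the stop-sequence sketch above. `min_p` needs a reasonably recent `transformers` release; treat this as illustrative, not a tested config.

```python
# Sketch: the suggested RP sampler settings as generation kwargs.
# min_p support requires a reasonably recent transformers release.
generation_kwargs = dict(
    do_sample=True,
    temperature=5.0,          # very high temperature...
    min_p=0.10,               # ...kept sane by min-p filtering of unlikely tokens
    repetition_penalty=1.15,
    max_new_tokens=512,
)
outputs = model.generate(**inputs, **generation_kwargs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```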
## How it was created
I took Sao10K/Euryale-1.3-L2-70B (a good base model) and added:
- Mikael110/llama-2-70b-guanaco-qlora (Creativity+assistant)
- IlyaGusev/saiga2_70b_lora (Creativity+assistant)
- s1ghhh/medllama-2-70b-qlora-1.1 (More data)
- v2ray/Airoboros-2.1-Jannie-70B-QLoRA (Creativity+assistant)
- Chat-Error/fiction.live-Kimiko-V2-70B (Creativity)
- alac/Waxwing-Storytelling-70B-LoRA (New, creativity)
- Doctor-Shotgun/limarpv3-llama2-70b-qlora (Creativity)
- v2ray/LLaMA-2-Wizard-70B-QLoRA (Creativity+assistant)
- v2ray/TonyGPT-70B-QLoRA (Special spice)
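For reference, folding a QLoRA adapter into a base model is usually done with `peft`'s `merge_and_unload`; below is a minimal sketch of one step of the stack above. The dtype and output path are assumptions, and a 70B model needs serious RAM for this.

```python
# Sketch: merging one of the listed adapters into the base model with peft.
# float16 and the output path are assumptions; repeat per adapter to build the stack.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "Sao10K/Euryale-1.3-L2-70B", torch_dtype=torch.float16
)
merged = PeftModel.from_pretrained(base, "Mikael110/llama-2-70b-guanaco-qlora")
merged = merged.merge_and_unload()  # bakes the adapter weights into the base
merged.save_pretrained("./euryale-guanaco-merged")
```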
Then I SLERP-merged it with cognitivecomputations/dolphin-2.2-70b at t=0.3 (needed to bridge the gap between this wonderful mess and SMaxxxer, otherwise the quality is low), and then SLERP-merged the result with ChuckMcSneed/SMaxxxer-v1-70b (creativity) at t=0.4. For the SLERP merges I used https://github.com/arcee-ai/mergekit; a config sketch follows below.
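A mergekit SLERP config for the dolphin step might look roughly like this; the `layer_range`, `dtype`, and the local model path are my assumptions, not taken from this card (see the mergekit README for the full schema):

```yaml
# Sketch of a mergekit SLERP config for the dolphin merge step (t=0.3).
# layer_range, dtype and the local model path are assumptions.
slices:
  - sources:
      - model: ./gembo-lora-stack  # hypothetical path to the LoRA-stacked model
        layer_range: [0, 80]
      - model: cognitivecomputations/dolphin-2.2-70b
        layer_range: [0, 80]
merge_method: slerp
base_model: ./gembo-lora-stack
parameters:
  t: 0.3  # interpolation factor; 0.4 for the later SMaxxxer step
dtype: float16
```

Run it with `mergekit-yaml config.yml ./output-model`, then repeat against SMaxxxer with t=0.4.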
## Benchmarks (Do they even mean anything anymore?)
### NeoEvalPlusN_benchmark
| Test name | Gembo | Gembo 1.1 |
|---|---|---|
| B | 2.5 | 2.5 |
| C | 1.5 | 1.5 |
| D | 3 | 3 |
| S | 7.5 | 6.75 |
| P | 5.25 | 5.25 |
| Total | 19.75 | 19 |