---
license: llama2
tags:
- merge
- mergekit
- nsfw
- not-for-all-audiences
language:
- en
- ru
---
![logo-gembo-1.1.png](logo-gembo.png)

This is like Gembo v1, but with 6-7% more human data. It performs a bit worse on the benches (who cares), but it should be able to write in more diverse styles (see [waxwing-styles.txt](waxwing-styles.txt)). Mainly made for RP, but it should be okay as an assistant too. It turned out quite good, considering the number of LoRAs I merged into it.

# Observations
- GPTisms and repetition: raise temperature and repetition penalty, and add common GPTisms as stop sequences (a minimal trimming sketch follows this list)
- A bit different from the usual stuff; I'd say it has so much slop in it that it unslops itself
- Lightly censored
- Fairly neutral; it can be violent if you ask it really nicely, though Goliath is a bit better at it
- Has a bit of optimism baked in, but it's not very severe
- Doesn't know when to stop: it can be quite verbose or stop almost immediately (maybe it wants LimaRP settings, idk)
- Sometimes can't handle the ' character
- The second model that tried to be funny unprompted for me (the first one was Goliath)
- Moderately intelligent
- Quite creative
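
On the stop-sequence tip: if your frontend can't take custom stop strings, a post-hoc trim does the same job. This is my own minimal sketch, not part of the model card, and the phrase list is hypothetical; swap in whatever GPTisms you actually see:

```python
# Truncate a completion at the first GPTism; this phrase list is made up.
GPTISMS = ["ministrations", "barely above a whisper", "shivers down your spine"]

def trim_at_gptisms(text: str, stops=GPTISMS) -> str:
    """Return text cut off at the earliest occurrence of any stop phrase."""
    cut = len(text)
    for stop in stops:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

print(trim_at_gptisms("Her voice was barely above a whisper as she spoke."))
# -> "Her voice was "
```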

# Naming
The internal name of this model was euryale-guano-saiga-med-janboros-kim-wing-lima-wiz-tony-d30-s40, but I decided to keep it short, and since it was iteration G in my files, I called it "Gembo".

# Prompt format
Alpaca. You can also try some other formats; I'm pretty sure it picked up a lot of them from all those merges.
```
### Instruction:
{instruction}

### Response:
```
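
If you're building prompts in code, a trivial helper like this fills the template; this is my own sketch, not something shipped with the model:

```python
# Alpaca-style template, matching the format block above.
ALPACA_TEMPLATE = """### Instruction:
{instruction}

### Response:
"""

def alpaca_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca format."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(alpaca_prompt("Write a short tavern scene in second person."))
```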

# Settings
As I already mentioned, high temperature and repetition penalty work great.
For RP, try something like this:
- temperature=5
- MinP=0.10
- rep.pen.=1.15

Adjust to match your needs.
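
To make that concrete, here is roughly how those values map onto a `transformers` sampling call. The repo id is my assumption (point it at wherever this model actually lives), and `min_p` needs a reasonably recent `transformers` release; treat the whole thing as a sketch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; adjust to the actual model location.
model_id = "ChuckMcSneed/Gembo-v1.1-70b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "### Instruction:\nDescribe the tavern as my character walks in.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# The RP settings from above: high temperature, MinP, repetition penalty.
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=5.0,
    min_p=0.10,
    repetition_penalty=1.15,
    max_new_tokens=512,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```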

# How it was created
I took Sao10K/Euryale-1.3-L2-70B (a good base model) and added:
- Mikael110/llama-2-70b-guanaco-qlora (Creativity+assistant)
- IlyaGusev/saiga2_70b_lora (Creativity+assistant)
- s1ghhh/medllama-2-70b-qlora-1.1 (More data)
- v2ray/Airoboros-2.1-Jannie-70B-QLoRA (Creativity+assistant)
- Chat-Error/fiction.live-Kimiko-V2-70B (Creativity)
- alac/Waxwing-Storytelling-70B-LoRA (New, creativity)
- Doctor-Shotgun/limarpv3-llama2-70b-qlora (Creativity)
- v2ray/LLaMA-2-Wizard-70B-QLoRA (Creativity+assistant)
- v2ray/TonyGPT-70B-QLoRA (Special spice)

Then I SLERP-merged it with cognitivecomputations/dolphin-2.2-70b at t=0.3 (needed to bridge the gap between this wonderful mess and SMaxxxer, otherwise its quality is low), and then SLERP-merged the result with ChuckMcSneed/SMaxxxer-v1-70b (Creativity) at t=0.4. For the SLERP merges I used https://github.com/arcee-ai/mergekit.
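
For reference, a mergekit SLERP config for the first of those two merges might look roughly like this. The local path, layer range, and dtype are my assumptions, not the author's actual config:

```yaml
# Hypothetical sketch of the first SLERP step (t=0.3 against dolphin-2.2-70b).
# "./euryale-plus-loras" is an assumed local path to Euryale with the LoRAs
# above already applied; the 0-80 layer range matches Llama-2-70B.
slices:
  - sources:
      - model: ./euryale-plus-loras
        layer_range: [0, 80]
      - model: cognitivecomputations/dolphin-2.2-70b
        layer_range: [0, 80]
merge_method: slerp
base_model: ./euryale-plus-loras
parameters:
  t: 0.3  # the second step repeats this with SMaxxxer-v1-70b at t=0.4
dtype: float16
```

Run it with mergekit's `mergekit-yaml` CLI, then repeat with a SMaxxxer config for the second step.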

# Benchmarks (Do they even mean anything anymore?)
### NeoEvalPlusN_benchmark
[My meme benchmark.](https://huggingface.co/datasets/ChuckMcSneed/NeoEvalPlusN_benchmark)

| Test name | Gembo | Gembo 1.1 |
| --------- | ----- | --------- |
| B         | 2.5   | 2.5       |
| C         | 1.5   | 1.5       |
| D         | 3     | 3         |
| S         | 7.5   | 6.75      |
| P         | 5.25  | 5.25      |
| Total     | 19.75 | 19        |