Triangle104 committed on
Commit 44809d9 · verified · 1 Parent(s): 8bd8325

Update README.md

Files changed (1)
  1. README.md +74 -0
README.md CHANGED
@@ -14,6 +14,80 @@ language:
  This model was converted to GGUF format from [`allura-org/Qwen2.5-32b-RP-Ink`](https://huggingface.co/allura-org/Qwen2.5-32b-RP-Ink) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/allura-org/Qwen2.5-32b-RP-Ink) for more details on the model.

+ ---
+ Model details:
+ -
+ A roleplay-focused LoRA finetune of Qwen 2.5 32b Instruct. Methodology and hyperparams inspired by SorcererLM and Slush.
+ Yet another model in the Ink series, following in the footsteps of the Nemo one.
+
+ Testimonials
+ -
+ whatever I tested was crack [...] It's got some refreshingly good prose, that's for sure
+
+ - TheLonelyDevil
+
+ The NTR is fantastic with this tune, lots of good gooning to be had. [...] Description and scene setting prose flows smoothly in comparison to larger models.
+
+ - TonyTheDeadly
+
+ This 32B handles complicated scenarios well, compared to a lot of 70Bs I've tried. Characters are portrayed accurately.
+
+ - Severian
+
+ From the very limited testing I did, I quite like this. [...] I really like the way it writes. Granted, I'm completely shitfaced right now, but I'm pretty sure it's good.
+
+ - ALK
+
+ [This model portrays] my character card almost exactly the way that I write them. It's a bit of a dream to get that with many of the current LLMs.
+
+ - ShotMisser64
+
+ Dataset
+ -
+ The worst mix of data you've ever seen. Like, seriously, you do not want to see the things that went into this model. It's bad.
+
+ "this is like washing down an adderall with a bottle of methylated rotgut" - inflatebot
+
+ Recommended Settings
+ -
+ Chat template: ChatML
+
+ Recommended samplers (not the be-all-end-all, try some on your own!):
+ -
+ Temp 0.85 / Top P 0.8 / Top A 0.3 / Rep Pen 1.03
+
+ Your samplers can go here! :3
+
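+ As a rough illustration, here is how these settings might be wired up with the llama-cpp-python bindings. This is only a sketch: the GGUF filename and the prompts are placeholders, and llama.cpp does not expose a Top A sampler, so that value is simply skipped.
+
+ ```python
+ from llama_cpp import Llama
+
+ # Placeholder filename; point this at whichever quant of this repo you downloaded.
+ llm = Llama(
+     model_path="qwen2.5-32b-rp-ink-q4_k_m.gguf",
+     n_ctx=8192,
+     chat_format="chatml",  # recommended chat template: ChatML
+ )
+
+ out = llm.create_chat_completion(
+     messages=[
+         {"role": "system", "content": "You are a roleplay partner. Stay in character."},
+         {"role": "user", "content": "Set the scene: a rainy night, a near-empty tavern."},
+     ],
+     temperature=0.85,     # Temp 0.85
+     top_p=0.8,            # Top P 0.8
+     repeat_penalty=1.03,  # Rep Pen 1.03 (Top A 0.3 has no llama.cpp equivalent)
+     max_tokens=512,
+ )
+ print(out["choices"][0]["message"]["content"])
+ ```
+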
+ Hyperparams
+
+ General
+ -
+ Epochs = 1
+
+ LR = 6e-5
+
+ LR Scheduler = Cosine
+
+ Optimizer = Paged AdamW 8bit
+
+ Effective batch size = 16
+
+ LoRA
+ -
+ Rank = 16
+
+ Alpha = 32
+
+ Dropout = 0.25 (Inspiration: Slush)
+
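+ The card does not say which training stack was used; purely as an illustration, the values listed above map onto a Hugging Face peft/transformers configuration roughly like this. The 4 x 4 batch split, the output directory, and the target modules (left at their defaults) are assumptions, not part of the original recipe.
+
+ ```python
+ from peft import LoraConfig
+ from transformers import TrainingArguments
+
+ # LoRA block above: Rank 16, Alpha 32, Dropout 0.25.
+ lora_config = LoraConfig(
+     r=16,
+     lora_alpha=32,
+     lora_dropout=0.25,
+     task_type="CAUSAL_LM",
+ )
+
+ # General block above: 1 epoch, LR 6e-5, cosine schedule, Paged AdamW 8bit.
+ # 4 x 4 is just one way to reach the stated effective batch size of 16.
+ training_args = TrainingArguments(
+     output_dir="qwen2.5-32b-rp-ink-lora",  # assumption
+     num_train_epochs=1,
+     learning_rate=6e-5,
+     lr_scheduler_type="cosine",
+     optim="paged_adamw_8bit",
+     per_device_train_batch_size=4,
+     gradient_accumulation_steps=4,
+ )
+ ```
+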
+ Credits
+ -
+ Humongous thanks to the people who created the data. I would credit you all, but that would be cheating ;)
+ Big thanks to all Allura members, for testing and emotional support ilya /platonic
+ especially to inflatebot who made the model card's image :3
+ Another big thanks to all the members of the ArliAI Discord server for testing! All of the people featured in the testimonials are from there :3
+
+ ---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
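  If you prefer the Python bindings over the CLI, llama-cpp-python can pull a quant straight from the Hub. A minimal sketch; the repo id and filename pattern are placeholders, so check this repo's file list for the actual quant names:

  ```python
  from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

  # Placeholder repo id and quant pattern; substitute the real GGUF repo and file.
  llm = Llama.from_pretrained(
      repo_id="your-username/Qwen2.5-32b-RP-Ink-GGUF",
      filename="*q4_k_m.gguf",
      n_ctx=8192,
      chat_format="chatml",
  )

  reply = llm.create_chat_completion(
      messages=[{"role": "user", "content": "Introduce yourself in character."}],
      max_tokens=128,
  )
  print(reply["choices"][0]["message"]["content"])
  ```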