Commit 58e6d95 (verified) by aashish1904 · 1 parent: a52d751
Upload README.md with huggingface_hub

Files changed (1): README.md (+68, -0)
---
license: apache-2.0
datasets:
- AuriAetherwiing/Allura
- kalomaze/Opus_Instruct_25k
base_model:
- AuriAetherwiing/Yi-1.5-9B-32K-tokfix
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/EVA-Yi-1.5-9B-32K-V1-GGUF
This is a quantized version of [EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1), created using llama.cpp.

# Original Model Card

**EVA Yi 1.5 9B v1**

<p>
An RP/storywriting focused model: a full-parameter finetune of Yi-1.5-9B-32K on a mixture of synthetic and natural data.<br>
A continuation of nothingiisreal's Celeste 1.x series, made to improve stability and versatility without losing the unique, diverse writing style of Celeste.
</p>

<p>
<h3>Quants (GGUF is not recommended; llama.cpp breaks the tokenizer fix):</h3>
<ul>
<li><a href="https://huggingface.co/bartowski/EVA-Yi-1.5-9B-32K-V1-GGUF">IMatrix GGUF by bartowski</a></li>
<li><a href="https://huggingface.co/mradermacher/EVA-Yi-1.5-9B-32K-V1-GGUF">Static GGUF by Mradermacher</a></li>
<li><a href="https://huggingface.co/bartowski/EVA-Yi-1.5-9B-32K-V1-exl2">EXL2 by bartowski</a></li>
</ul>
We recommend using the original BFloat16 weights; quantization seems to affect Yi significantly more than other model architectures.
</p>
<p>
Prompt format is ChatML.<br>
<h3>Recommended sampler values:</h3>

- Temperature: 1
- Min-P: 0.05

<h3>Recommended SillyTavern presets (via CalamitousFelicitousness):</h3>

- [Context](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json)
- [Instruct and System Prompt](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json)
</p>
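
As a quick illustration of the ChatML prompt format and the sampler values above, here is a minimal sketch. The helper function and the sampler dictionary are illustrative assumptions, not part of the original card; most inference backends apply the chat template for you when the tokenizer ships one.

```python
# Illustrative sketch of the ChatML format this model expects.
# build_chatml_prompt is a hypothetical helper, not an official API.

def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts as a ChatML string.

    ChatML wraps each turn as <|im_start|>ROLE\\n...<|im_end|> and leaves
    a trailing assistant header for the model to complete.
    """
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # the model continues from here
    return "".join(parts)

# Recommended sampler values from this card.
SAMPLERS = {"temperature": 1.0, "min_p": 0.05}

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a creative writing partner."},
    {"role": "user", "content": "Start a short scene in a rainy city."},
])
print(prompt)
```

The same settings map directly onto most backends (e.g. a `temperature`/`min_p` pair in llama.cpp or SillyTavern sampler panels).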

<p>
<br>
<h3>Training data:</h3>
<ul>
<li>The Celeste 70B 0.1 data mixture, minus the Opus Instruct subset. See that model's <a href="https://huggingface.co/nothingiisreal/L3.1-70B-Celeste-V0.1-BF16">card</a> for details.</li>
<li>Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.</li>
</ul>
<h3>Hardware used:</h3>
<ul><li>4x RTX 3090 Ti for 5 days.</li></ul><br>
</p>
The model was trained by Kearm and Auri.
<h4>Special thanks:</h4>
<ul>
<li>to Lemmy, Gryphe, Kalomaze and Nopm for the data;</li>
<li>to ALK, Fizz and CalamitousFelicitousness for the Yi tokenizer fix;</li>
<li>and to InfermaticAI's community for their continued support of our endeavors.</li>
</ul>