aifeifei798 committed "Upload README.md"

README.md CHANGED
```diff
@@ -14,11 +14,7 @@ tags:
 - Lewdiculous's superb gguf version, thank you for your conscientious and responsible dedication.
 - https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF-IQ-Imatrix-Request
 
-- The difference from normal quantizations is that I quantize the output and embed tensors to f16, and the other tensors to q5_k, q6_k, or q8_0. This creates models that are little or not at all degraded and have a smaller size. They run at about 3-6 t/s on CPU only using llama.cpp, and obviously faster on computers with potent GPUs.
-- https://huggingface.co/ZeroWw/llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF
-- More models here: https://huggingface.co/RobertSinclair
+
 ## Why 1048K?
 Thanks to the optimization of the base model, its performance is excellent across the 2K-1048K context range. Common context sizes such as 8192 or 32K are insufficient for my personal usage scenarios. My primary role involves managing virtual idol Twitter accounts and assisting with singing, etc. A good conversation can be very lengthy, and sometimes even 32K is not enough. Imagine having a heated chat with your virtual girlfriend, only for it to abruptly cut off; that feeling is too painful.
```
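The mixed-precision recipe described in the removed lines (output and embedding tensors kept at f16, the remaining tensors quantized to q5_k, q6_k, or q8_0) could be reproduced roughly as below with llama.cpp's `llama-quantize` tool. This is a sketch, not the author's exact command: the filenames are placeholders, and the `--output-tensor-type`/`--token-embedding-type` flags assume a reasonably recent llama.cpp build.

```shell
# Sketch: keep output and token-embedding tensors at f16,
# quantize all other tensors to Q6_K.
# Filenames are placeholders; requires a llama.cpp build
# that provides the llama-quantize binary and these flags.
llama-quantize \
  --output-tensor-type f16 \
  --token-embedding-type f16 \
  llama3-8B-DarkIdol-2.2-f16.gguf \
  llama3-8B-DarkIdol-2.2-q6_k-f16.gguf \
  Q6_K
```

Swapping `Q6_K` for `Q5_K` or `Q8_0` would give the other size/quality trade-off points the paragraph mentions.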
```diff
@@ -39,8 +35,8 @@ The module combination has been readjusted to better fulfill various roles and h
 - DarkIdol: roles that you can imagine and those that you cannot imagine.
 - Roleplay
 - Specialized in various role-playing scenarios
-- For more, look at the test roles. (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/
-- For more, look at the LM Studio presets. (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/
+- For more, look at the test roles. (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/tree/main/test)
+- For more, look at the LM Studio presets. (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/tree/main/config-presets)
 
 ![image/png](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K/resolve/main/llama3-8B-DarkIdol-2.2-Uncensored-1048K.png)
```