Create README.md
---
base_model: mistralai/Mistral-Large-Instruct-2407
---
This is an experimental model designed for creative writing and role-playing. Its capabilities should be similar to Mistral-Large, but the stories it writes have a nihilistic bias.
I have attempted to apply the optimism_vs_nihilism__debias and mistral-large:123b-optimism_vs_nihilism__nihilism control vectors from [jukofyork/creative-writing-control-vectors-v3.0](https://huggingface.co/jukofyork/creative-writing-control-vectors-v3.0), trained by [jukofyork](https://huggingface.co/jukofyork), to Mistral-Large so that it can be used with exllamav2 (and other inference engines) that don't support control vectors.
Note: This one is a lot better than v1. The output is very similar to the control vectors applied at runtime with llama.cpp.
If you're using GGUF with llama.cpp, you're better off applying the control vectors themselves, since you'll have fine-grained control over everything (direction and strength at runtime), rather than using this model, which has them baked into the model weights.
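
The trade-off above can be sketched with a toy numpy example (illustrative only: real control vectors are added to the hidden state at many layers, and the names `W`, `v`, and `alpha` here are made up for the sketch, not taken from any library). Applying a vector `v` at strength `alpha` during inference gives the same result as folding `alpha * v` into a layer bias offline; the difference is that the baked version freezes `alpha`, which is why runtime application keeps finer control.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8

# Toy stand-ins for one layer's output projection and a control vector.
W = rng.normal(size=(d_model, d_model))  # hypothetical projection weight
b = np.zeros(d_model)                    # original bias (often zero/absent)
v = rng.normal(size=d_model)             # control vector direction
alpha = 0.6                              # strength, tunable at runtime

x = rng.normal(size=d_model)             # hidden state entering the layer

# Runtime application: project, then add the scaled control vector.
runtime = W @ x + b + alpha * v

# "Baked in": fold the scaled vector into the bias once, offline.
b_baked = b + alpha * v
baked = W @ x + b_baked

# Identical outputs -- but alpha is now frozen into b_baked.
assert np.allclose(runtime, baked)
```

With the baked-in form you would have to re-merge the weights to change `alpha`, whereas llama.cpp lets you adjust the scale per run.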
EXL2 Quants here:
[4.5BPW](https://huggingface.co/gghfez/DarkMage-Large-v3-123b-4.5/tree/4.5bpw)