---
base_model: mistralai/Mistral-Large-Instruct-2407
---

This is an experimental model designed for creative writing and role playing. Its capabilities should be similar to Mistral-Large, but the stories have a nihilistic bias.

I have attempted to apply the `optimism_vs_nihilism__debias` and `mistral-large:123b-optimism_vs_nihilism__nihilism` control vectors from [jukofyork/creative-writing-control-vectors-v3.0](https://huggingface.co/jukofyork/creative-writing-control-vectors-v3.0), trained by [jukofyork](https://huggingface.co/jukofyork), to Mistral-Large so it can be used with exllamav2 (and other inference engines) which don't support control vectors.

Note: this version is a lot better than v1. The output is very similar to the control vectors applied at runtime with llama.cpp.

If you're using gguf/llama.cpp, you're better off applying the vectors themselves, since you'll have fine-grained control over everything, rather than using this model, which has them baked into the model weights.

EXL2 Quants here: [4.5BPW](https://huggingface.co/gghfez/DarkMage-Large-v3-123b-4.5/tree/4.5bpw)
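
For context, a control vector is essentially a per-layer direction that gets added to the residual stream during generation; this model folds roughly the same offsets into its weights so engines without control-vector support (such as exllamav2) still produce the steered output. The sketch below is only an illustration of that runtime-steering idea using forward hooks in `transformers`; the direction file, layer selection, and scale are placeholder assumptions, not the exact procedure used to build this model.

```python
# Illustrative sketch only: shows how per-layer control-vector directions can
# steer the residual stream at inference time. The direction file, layers and
# scale below are hypothetical placeholders, not this model's merge recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Large-Instruct-2407"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Hypothetical dict {layer_index: direction tensor of shape (hidden_size,)}
# exported from the control-vector files; loading/parsing is omitted here.
directions = torch.load("nihilism_directions.pt")  # placeholder path
scale = 1.0  # steering strength; runtime application lets you tune this freely

def make_hook(direction):
    def hook(module, inputs, output):
        # Decoder layers may return a tuple whose first element is the hidden states.
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + scale * direction.to(hidden.dtype).to(hidden.device)
        if isinstance(output, tuple):
            return (hidden,) + output[1:]
        return hidden
    return hook

handles = [
    model.model.layers[i].register_forward_hook(make_hook(d))
    for i, d in directions.items()
]

prompt = "Write the opening paragraph of a short story about a lighthouse keeper."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(out[0], skip_special_tokens=True))

# Remove the hooks to restore the unmodified model.
for h in handles:
    h.remove()
```

Applying the offsets this way (or via llama.cpp's control-vector options) keeps the steering strength adjustable per run, which is exactly the flexibility you give up when the vectors are baked into the weights as in this model.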