---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
- teknium/openhermes
- cognitivecomputations/dolphin
- jondurbin/airoboros-3.1
- unalignment/toxic-dpo-v0.1
- unalignment/spicy-3.1
language:
- en
---
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/A_gdBved2-hXx24-a6V1w.jpeg)

# The flower of Ares.

## These are the GGUF files of the fine-tuned model.

For use with [llama.cpp](https://github.com/ggerganov/llama.cpp), or with frontends built on it such as oobabooga's text-generation-webui or vLLM.

Fine-tuned on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1): [my team and I](https://huggingface.co/ConvexAI) reformatted many different datasets and included a small amount of private data to see how much we could improve Mistral. I spoke to it personally for about an hour, and I believe we need to work on the format of our private dataset a bit more, but other than that, it turned out great. I will be submitting it to the Open LLM Leaderboard today.

## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Q2_K](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q2_k.gguf) | Q2_K | 2 | 2.7 GB | 4.7 GB | smallest, significant quality loss - not recommended for most purposes |
| [Q3_K_M](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q3_k_m.gguf) | Q3_K_M | 3 | 3.52 GB | 5.52 GB | very small, high quality loss |
| [Q4_0](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q4_0.gguf) | Q4_0 | 4 | 4.11 GB | 6.11 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Q4_K_M](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q4_k_m.gguf) | Q4_K_M | 4 | 4.37 GB | 6.37 GB | medium, balanced quality - recommended |
| [Q5_0](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q5_0.gguf) | Q5_0 | 5 | 5.0 GB | 7.0 GB | legacy; large, balanced quality |
| [Q5_K_M](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q5_k_m.gguf) | Q5_K_M | 5 | 5.13 GB | 7.13 GB | large, balanced quality - recommended |
| [Q6_K](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q6_k.gguf) | Q6_K | 6 | 5.94 GB | 7.94 GB | very large, extremely low quality loss |
| [Q8_0](https://huggingface.co/Kquant03/Medusa-7B-GGUF/blob/main/ggml-model-q8_0.gguf) | Q8_0 | 8 | 7.7 GB | 9.7 GB | very large, extremely low quality loss - not recommended |

- Uses the Mistral prompt template (works with chat-instruct mode).
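
For anyone scripting against these files rather than using a UI, here is a minimal sketch with the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings. The bindings, the local file path, and the generation settings are assumptions for illustration, not something this card ships:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Hypothetical local path: download one of the .gguf files above first.
llm = Llama(
    model_path="./ggml-model-q4_k_m.gguf",  # Q4_K_M, the "recommended" balance
    n_ctx=4096,        # context window; assumed value, adjust to your needs
    n_gpu_layers=-1,   # offload all layers when built with GPU support
)

# Wrap the request in the Mistral instruct template noted above.
prompt = "[INST] Write a short haiku about the sea. [/INST]"

out = llm(prompt, max_tokens=128, stop=["</s>"])
print(out["choices"][0]["text"])
```

Frontends such as text-generation-webui apply the same `[INST] ... [/INST]` wrapping for you when a Mistral instruction template is selected, which is what the chat-instruct note above refers to.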