---
base_model: N8Programs/Daschund
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- teknium/OpenHermes-2.5-Mistral-7B
- openchat/openchat-3.5-0106
- andrijdavid/macaroni-7b
- mistralai/Mistral-7B-Instruct-v0.2
- Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
- Intel/neural-chat-7b-v3-1
- mlabonne/Beagle14-7B
- mlabonne/NeuralBeagle14-7B
---
## About

Static quants of https://huggingface.co/N8Programs/Daschund

<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.
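
As a quick start, the snippet below is a minimal sketch of downloading one quant from this repo and running it locally with the `llama-cpp-python` bindings. The repo ID and filename match this card; the context size, prompt, and token limit are illustrative assumptions, not recommended settings.

```python
# Minimal sketch: fetch one quant from this repo and run it locally.
# Assumes `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single GGUF file; the filename matches the table below.
model_path = hf_hub_download(
    repo_id="mradermacher/Daschund-GGUF",
    filename="Daschund.Q4_K_M.gguf",
)

# Load the model and generate a short completion.
# n_ctx is an illustrative choice; tune it to your hardware.
llm = Llama(model_path=model_path, n_ctx=4096)
result = llm("The dachshund is", max_tokens=48)
print(result["choices"][0]["text"])
```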
## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q2_K.gguf) | Q2_K | 3.0 |  |
| [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.IQ3_XS.gguf) | IQ3_XS | 3.3 |  |
| [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q3_K_S.gguf) | Q3_K_S | 3.4 |  |
| [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.IQ3_M.gguf) | IQ3_M | 3.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q3_K_L.gguf) | Q3_K_L | 4.1 |  |
| [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.IQ4_XS.gguf) | IQ4_XS | 4.2 |  |
| [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q5_K_S.gguf) | Q5_K_S | 5.3 |  |
| [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q5_K_M.gguf) | Q5_K_M | 5.4 |  |
| [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Daschund-GGUF/resolve/main/Daschund.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
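
If you are unsure which file fits your hardware, the helper below is an illustrative sketch that picks the largest quant from the table above fitting a given memory budget. The sizes are copied from the table; the 1.5 GB headroom for context/KV cache is a rough assumption, not a measured value.

```python
# Illustrative helper: pick the largest quant from the table above that
# fits a given memory budget. Sizes (GB) are copied from the table; the
# headroom for context/KV cache is a rough guess, not a measurement.
QUANTS = {
    "Q2_K": 3.0, "IQ3_XS": 3.3, "Q3_K_S": 3.4, "IQ3_S": 3.4,
    "IQ3_M": 3.5, "Q3_K_M": 3.8, "Q3_K_L": 4.1, "IQ4_XS": 4.2,
    "Q4_0": 4.4, "Q4_K_S": 4.4, "IQ4_NL": 4.4, "Q4_K_M": 4.6,
    "Q5_K_S": 5.3, "Q5_K_M": 5.4, "Q6_K": 6.2, "Q8_0": 7.9,
}

def pick_quant(budget_gb: float, headroom_gb: float = 1.5) -> str:
    """Return the largest quant whose file size fits budget_gb minus headroom.

    Ties on size resolve in table order; per the Notes column, you may
    prefer a K-quant or IQ-quant over Q4_0 at the same size.
    """
    fitting = {name: gb for name, gb in QUANTS.items()
               if gb + headroom_gb <= budget_gb}
    if not fitting:
        raise ValueError("No quant fits; consider CPU offload or a smaller model.")
    return max(fitting, key=fitting.get)

print(pick_quant(8.0))  # e.g. an 8 GB GPU -> 'Q6_K'
```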
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.
## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->