|
---
base_model: perlthoughts/Chupacabra-8x7B-MoE
language:
- en
library_name: transformers
license: apache-2.0
no_imatrix: '[ 1/ 995] blk.0.ffn_up.4.weight - [ 4096, 14336, 1, 1],
  type = f16, converting to iq3_xxs .. Oops: found point 1016 not on grid: 8 127
  0 0'
quantized_by: mradermacher
tags:
- moe
---
|
## About |
|
|
|
weighted/imatrix quants of https://huggingface.co/perlthoughts/Chupacabra-8x7B-MoE |
|
|
|
(llama.cpp crashed when creating most I-quants, as recorded in the `no_imatrix` error above, so mostly non-IQ quants are provided)
|
|
|
<!-- provided-files --> |
|
## Usage |
|
|
|
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
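
As a quick start, here is a minimal sketch of loading one of these quants with the `llama-cpp-python` bindings (an assumption, not the only option; `pip install llama-cpp-python huggingface_hub` first). The repository and filename come from the quant table below; context size and GPU offload are illustrative choices, adjust them to your hardware:

```python
# Minimal sketch: download and load one quant from this repo via llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Chupacabra-8x7B-MoE-i1-GGUF",
    filename="Chupacabra-8x7B-MoE.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
    n_ctx=4096,       # context window; adjust to your memory budget
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm("Q: What is a chupacabra?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```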
|
|
|
## Provided Quants |
|
|
|
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
|
|
|
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Chupacabra-8x7B-MoE-i1-GGUF/resolve/main/Chupacabra-8x7B-MoE.i1-IQ2_M.gguf) | i1-IQ2_M | 15.4 | |
| [GGUF](https://huggingface.co/mradermacher/Chupacabra-8x7B-MoE-i1-GGUF/resolve/main/Chupacabra-8x7B-MoE.i1-Q2_K.gguf) | i1-Q2_K | 17.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Chupacabra-8x7B-MoE-i1-GGUF/resolve/main/Chupacabra-8x7B-MoE.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Chupacabra-8x7B-MoE-i1-GGUF/resolve/main/Chupacabra-8x7B-MoE.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Chupacabra-8x7B-MoE-i1-GGUF/resolve/main/Chupacabra-8x7B-MoE.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Chupacabra-8x7B-MoE-i1-GGUF/resolve/main/Chupacabra-8x7B-MoE.i1-Q4_K_S.gguf) | i1-Q4_K_S | 27.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Chupacabra-8x7B-MoE-i1-GGUF/resolve/main/Chupacabra-8x7B-MoE.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chupacabra-8x7B-MoE-i1-GGUF/resolve/main/Chupacabra-8x7B-MoE.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.5 | |
| [GGUF](https://huggingface.co/mradermacher/Chupacabra-8x7B-MoE-i1-GGUF/resolve/main/Chupacabra-8x7B-MoE.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.5 | |
| [GGUF](https://huggingface.co/mradermacher/Chupacabra-8x7B-MoE-i1-GGUF/resolve/main/Chupacabra-8x7B-MoE.i1-Q6_K.gguf) | i1-Q6_K | 38.6 | practically like static Q6_K |
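
To fetch just one of these files without cloning the whole repository, a short `huggingface_hub` sketch like the following should work (the filename matches the Q4_K_S row above; any other row works the same way):

```python
# Minimal sketch: download a single quant file from this repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Chupacabra-8x7B-MoE-i1-GGUF",
    filename="Chupacabra-8x7B-MoE.i1-Q4_K_S.gguf",  # "optimal size/speed/quality"
)
print(path)  # local path of the cached GGUF file
```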
|
|
|
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
|
|
|
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) |
|
|
|
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
|
|
|
## FAQ / Model Request |
|
|
|
See https://huggingface.co/mradermacher/model_requests for answers to
questions you might have, or if you want some other model quantized.
|
|
|
## Thanks |
|
|
|
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
|
|
<!-- end --> |
|
|