ChristianAzinn/mixtral-8x22b-v0.1-imatrix
Text Generation · Transformers · GGUF · English
Quantization precisions: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
Tags: quantized, GGUF, mixtral, Mixture of Experts
License: apache-2.0
1 contributor · History: 2 commits
Latest commit ce07e86 (verified) by ChristianAzinn: "upload q4_k", 9 months ago
All files below were added in the "upload q4_k" commit, 9 months ago; the .gguf shards are stored via Git LFS.

File                                                    Size
.gitattributes                                          2.6 kB
mixtral-8x22b-v0.1-imatrix.q4_k_m-00001-of-00006.gguf   954 MB  (LFS)
mixtral-8x22b-v0.1-imatrix.q4_k_m-00002-of-00006.gguf   803 MB  (LFS)
mixtral-8x22b-v0.1-imatrix.q4_k_m-00003-of-00006.gguf   843 MB  (LFS)
mixtral-8x22b-v0.1-imatrix.q4_k_m-00004-of-00006.gguf   850 MB  (LFS)
mixtral-8x22b-v0.1-imatrix.q4_k_m-00005-of-00006.gguf   41.6 GB (LFS)
mixtral-8x22b-v0.1-imatrix.q4_k_m-00006-of-00006.gguf   40.5 GB (LFS)
mixtral-8x22b-v0.1-imatrix.q4_k_s-00001-of-00006.gguf   954 MB  (LFS)
mixtral-8x22b-v0.1-imatrix.q4_k_s-00002-of-00006.gguf   803 MB  (LFS)
mixtral-8x22b-v0.1-imatrix.q4_k_s-00003-of-00006.gguf   843 MB  (LFS)
mixtral-8x22b-v0.1-imatrix.q4_k_s-00004-of-00006.gguf   850 MB  (LFS)
mixtral-8x22b-v0.1-imatrix.q4_k_s-00005-of-00006.gguf   39.4 GB (LFS)
mixtral-8x22b-v0.1-imatrix.q4_k_s-00006-of-00006.gguf   37.6 GB (LFS)
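Each quantization is split into six GGUF shards, so the relevant number for planning a download is the per-quantization total rather than any single file. A minimal sketch that sums the shard sizes listed above (decimal units, exactly as shown on the page; the totals are an assumption derived from that listing, not a figure stated by the repository):

```python
# Shard sizes in GB for each quantization, transcribed from the file
# listing above (MB values converted to GB: 954 MB -> 0.954 GB, etc.).
shards_gb = {
    "q4_k_m": [0.954, 0.803, 0.843, 0.850, 41.6, 40.5],
    "q4_k_s": [0.954, 0.803, 0.843, 0.850, 39.4, 37.6],
}

# Print the approximate total download size per quantization.
for quant, sizes in shards_gb.items():
    print(f"{quant}: {sum(sizes):.2f} GB across {len(sizes)} shards")
# -> q4_k_m: 85.55 GB across 6 shards
# -> q4_k_s: 80.45 GB across 6 shards
```

As expected for K-quants, the "medium" q4_k_m variant is a few GB larger than the "small" q4_k_s variant of the same model.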