bartowski/Monstral-123B-GGUF
Text Generation · GGUF · English · chat · Inference Endpoints · imatrix · conversational
License: mrl
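
The commit messages below show that every file in this repo was pushed with huggingface_hub, and the same library is the simplest way to pull a quant back down. A minimal sketch in Python, assuming you want the single-file IQ2_M quant from the listing (about 41.6 GB); the repo id and filename are copied from this page, everything else (cache location, default revision) is just the library's default behavior:

    from huggingface_hub import hf_hub_download

    # Fetch one single-file quant from this repo into the local Hugging Face cache.
    # Repo id and filename are taken from the file listing below (~41.6 GB on disk).
    path = hf_hub_download(
        repo_id="bartowski/Monstral-123B-GGUF",
        filename="Monstral-123B-IQ2_M.gguf",
    )
    print(path)  # local path to the downloaded .gguf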
Files and versions
Monstral-123B-GGUF · 1 contributor · History: 16 commits
Latest commit 3e8d0a0 (verified, 3 months ago) by bartowski: Upload Monstral-123B-IQ2_M.gguf with huggingface_hub
Folder                      Last commit                            Updated
Monstral-123B-IQ3_M         Upload folder using huggingface_hub    3 months ago
Monstral-123B-IQ4_XS        Upload folder using huggingface_hub    3 months ago
Monstral-123B-Q3_K_L        Upload folder using huggingface_hub    3 months ago
Monstral-123B-Q3_K_M        Upload folder using huggingface_hub    3 months ago
Monstral-123B-Q3_K_S        Upload folder using huggingface_hub    3 months ago
Monstral-123B-Q3_K_XL       Upload folder using huggingface_hub    3 months ago
Monstral-123B-Q4_0          Upload folder using huggingface_hub    3 months ago
Monstral-123B-Q4_K_M        Upload folder using huggingface_hub    3 months ago
Monstral-123B-Q5_K_M        Upload folder using huggingface_hub    3 months ago
Monstral-123B-Q6_K          Upload folder using huggingface_hub    3 months ago
Monstral-123B-Q8_0          Upload folder using huggingface_hub    3 months ago
File                         Size      LFS   Last commit                                              Updated
.gitattributes               4.28 kB         Upload Monstral-123B-IQ2_M.gguf with huggingface_hub     3 months ago
Monstral-123B-IQ2_M.gguf     41.6 GB   LFS   Upload Monstral-123B-IQ2_M.gguf with huggingface_hub     3 months ago
Monstral-123B-IQ3_XXS.gguf   47 GB     LFS   Upload Monstral-123B-IQ3_XXS.gguf with huggingface_hub   3 months ago
Monstral-123B-Q2_K.gguf      45.2 GB   LFS   Upload Monstral-123B-Q2_K.gguf with huggingface_hub      3 months ago
Monstral-123B-Q2_K_L.gguf    45.6 GB   LFS   Upload Monstral-123B-Q2_K_L.gguf with huggingface_hub    3 months ago
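
The per-quant folders listed above (Monstral-123B-Q4_K_M and the others) usually hold quants whose weights are split across several GGUF shards rather than stored as one file; the shard names are not visible in this listing, so treat that as an assumption. A hedged sketch for grabbing one entire folder with huggingface_hub's snapshot_download; the folder name comes from the listing, while local_dir is an arbitrary choice:

    from huggingface_hub import snapshot_download

    # Download only the files under the Monstral-123B-Q4_K_M/ folder,
    # mirroring the repo layout into a local directory.
    snapshot_download(
        repo_id="bartowski/Monstral-123B-GGUF",
        allow_patterns=["Monstral-123B-Q4_K_M/*"],
        local_dir="Monstral-123B-GGUF",
    )

If the folder does contain split shards, tools such as llama.cpp can load the quant by being pointed at the first shard in that directory.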