---
license: apache-2.0
base_model: Isotonic/Mixnueza-6x32M-MoE
---

GGUF version of Isotonic/Mixnueza-6x32M-MoE.

It was not possible to quantize the model, so only the F16 and F32 GGUF files are available.

## Try it with llama.cpp

```sh
brew install ggerganov/ggerganov/llama.cpp

llama-cli \
  --hf-repo Felladrin/gguf-Mixnueza-6x32M-MoE \
  --model Mixnueza-6x32M-MoE.F32.gguf \
  --random-prompt \
  --temp 1.3 \
  --dynatemp-range 1.2 \
  --top-k 0 \
  --top-p 1 \
  --min-p 0.1 \
  --typical 0.85 \
  --mirostat 2 \
  --mirostat-ent 3.5 \
  --repeat-penalty 1.1 \
  --repeat-last-n -1 \
  -n 256
```
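If you prefer to use the model from Python, the same sampling settings can be expressed with llama-cpp-python. This is a hypothetical sketch, not a tested recipe: it assumes the `llama-cpp-python` package and a locally downloaded `Mixnueza-6x32M-MoE.F32.gguf` file, and the `--dynatemp-range` and `--repeat-last-n` flags have no direct keyword equivalents in `create_completion`, so they are omitted here.

```python
# Rough llama-cpp-python equivalent of the CLI flags above
# (hypothetical sketch; llama-cpp-python and the GGUF file are assumed).
sampling_kwargs = {
    "temperature": 1.3,     # --temp 1.3
    "top_k": 0,             # --top-k 0 (disabled)
    "top_p": 1.0,           # --top-p 1 (disabled)
    "min_p": 0.1,           # --min-p 0.1
    "typical_p": 0.85,      # --typical 0.85
    "mirostat_mode": 2,     # --mirostat 2
    "mirostat_tau": 3.5,    # --mirostat-ent 3.5 (target entropy)
    "repeat_penalty": 1.1,  # --repeat-penalty 1.1
    "max_tokens": 256,      # -n 256
}

# Uncomment once llama-cpp-python and the model file are in place:
# from llama_cpp import Llama
# llm = Llama(model_path="Mixnueza-6x32M-MoE.F32.gguf")
# out = llm.create_completion("Once upon a time", **sampling_kwargs)
# print(out["choices"][0]["text"])
```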