# maid-yuzu-v8-alter-iMat-GGUF
<b>Highly requested model.</b> Quantized from fp16 with love. The iMatrix file was calculated from the Q8 quant using an input file from [this discussion](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
For a brief rundown of iMatrix quant performance, please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747).
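For reference, the general shape of an iMatrix quantization workflow with llama.cpp looks like the sketch below. The file names are hypothetical and the exact commands used for this repo are an assumption; they follow the `imatrix` and `quantize` tools shipped with a llama.cpp source build.

```shell
# Sketch only: paths and model file names are placeholders, not the actual
# commands used to produce this repo's quants.

# 1. Compute the importance matrix from a quant, using a calibration text file
#    (such as the input file linked in the discussion above):
./imatrix -m maid-yuzu-v8-alter-Q8_0.gguf -f calibration-data.txt -o imatrix.dat

# 2. Quantize the fp16 GGUF, passing the imatrix to guide which weights
#    get more precision:
./quantize --imatrix imatrix.dat maid-yuzu-v8-alter-f16.gguf maid-yuzu-v8-alter-Q4_K_M.gguf Q4_K_M
```

The imatrix step records per-weight activation statistics, which the quantizer then uses to preserve precision where it matters most; this is what the performance comparison in the linked PR measures.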