FantasiaFoundry
committed
Update README.md
README.md CHANGED
@@ -24,7 +24,7 @@ tags:
 >
 > **Experimental:**
 >
-> There is a new experimental script added, `gguf-imat-llama-3-lossless.py`, which performs the conversions directly from a BF16 GGUF to hopefully generate lossless, or as close to that for now, Llama-3 model quantizations avoiding the recent talked about issues on that topic, it is more resource intensive and will generate more writes in the drive as there's a whole additional conversion step that isn't performed in the previous version. This should
+> There is a new experimental script, `gguf-imat-llama-3-lossless.py`, which performs the conversion directly from a BF16 GGUF to generate lossless (or as close to lossless as currently possible) Llama-3 model quantizations, avoiding the recently discussed issues on that topic. It is more resource-intensive and will generate more drive writes, as there is a whole additional conversion step that isn't performed by the previous version. This should only be necessary until we have GPU support for BF16 to run directly without conversion.

 Pull Requests with your own features and improvements to this script are always welcome.
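The "lossless" rationale in the diff above can be sketched concretely. A plausible reading (an assumption, not stated in the script itself) is that quantizing through an intermediate FP16 GGUF can clip or zero BF16 weights, because FP16 has a much narrower exponent range than BF16, while quantizing straight from the BF16 GGUF avoids that detour. The snippet below emulates BF16 with NumPy (which has no native bfloat16 dtype) by truncating the low 16 bits of a float32, then shows what an FP16 round trip does to the same values:

```python
import numpy as np

def to_bf16(x: np.ndarray) -> np.ndarray:
    """Emulate BF16 by zeroing the low 16 mantissa bits of each float32
    (truncation; BF16 keeps float32's full 8-bit exponent)."""
    bits = x.astype(np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

# Hypothetical weight values: one large, one ordinary, one tiny.
w = np.array([1.0e5, 3.14159, -2.5e-8], dtype=np.float32)
w_bf16 = to_bf16(w)

# Direct path: every BF16 value is exactly representable in float32,
# so nothing beyond the initial BF16 rounding is lost.
print(w_bf16)

# Detour through FP16: the large weight exceeds FP16's maximum (~65504)
# and overflows to inf, and the tiny weight underflows toward zero.
w_via_fp16 = w_bf16.astype(np.float16).astype(np.float32)
print(w_via_fp16)
```

This is only an illustration of the precision/range argument, not of the script's actual code path; the real pipeline operates on GGUF tensor data rather than small NumPy arrays.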