mukel committed 44acbe9 (verified, parent 8cdb382)

Update README.md

Files changed (1): README.md (+21 -3)

---
license: apache-2.0
---
# Pure quantizations of `Mistral-7B-Instruct-v0.3` for [mistral.java](https://github.com/mukel/mistral.java)

In the wild, Q8_0 quantizations are fine, but Q4_0 quantizations are rarely pure: e.g. the `output.weight` tensor is often quantized with Q6_K instead of Q4_0.
A pure Q4_0 quantization can be generated from a high-precision (F32, F16, BFLOAT16) .gguf source with the `quantize` utility from llama.cpp as follows:

```
./quantize --pure ./Mistral-7B-Instruct-v0.3-F32.gguf ./Mistral-7B-Instruct-v0.3-Q4_0.gguf Q4_0
```
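
To confirm that the output really is pure, one quick check is to list each tensor's quantization type. A minimal sketch using the `gguf` Python package that ships with llama.cpp (the file name is taken from the command above; in a pure Q4_0 file, everything except small 1D tensors such as norms should report Q4_0, with no Q6_K outliers):

```
from gguf import GGUFReader

# Read the quantized file and print each tensor's quantization type.
reader = GGUFReader("./Mistral-7B-Instruct-v0.3-Q4_0.gguf")
for tensor in reader.tensors:
    print(f"{tensor.name}: {tensor.tensor_type.name}")
```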

Original model: [https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)

**Note that this model does not support a system prompt.**
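
Since there is no system slot, any system-style guidance has to be folded into the user turn of Mistral's `[INST] ... [/INST]` instruct template. A minimal sketch of assembling such a prompt (the authoritative template is the chat template in the model's tokenizer metadata; the helper below is hypothetical):

```
# Hypothetical helper: wraps a user message in Mistral's instruct template.
# There is no system role; guidance goes into the user turn itself.
def format_prompt(user_message: str) -> str:
    return f"<s>[INST] {user_message} [/INST]"

print(format_prompt("What is the capital of France?"))
# <s>[INST] What is the capital of France? [/INST]
```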

The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-7B-v0.3.
Mistral-7B-v0.3 has the following changes compared to Mistral-7B-v0.2:
- Extended vocabulary to 32768
- Supports v3 Tokenizer
- Supports function calling (see the sketch below)
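
For illustration, the v3 tokenizer reserves special tokens for tool use, such as `[AVAILABLE_TOOLS]`, `[/AVAILABLE_TOOLS]`, and `[TOOL_CALLS]`. A rough sketch of how a tool-use prompt might be assembled (the token names come from the v3 tokenizer; the tool name and JSON schema here are assumptions for illustration):

```
import json

# Hypothetical tool description; the exact JSON layout is an assumption.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
    },
}]

# Tool schemas are serialized as JSON between the [AVAILABLE_TOOLS]
# special tokens, ahead of the user turn.
prompt = (
    "<s>[AVAILABLE_TOOLS]" + json.dumps(tools) + "[/AVAILABLE_TOOLS]"
    + "[INST] What's the weather in Paris? [/INST]"
)
print(prompt)
```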