---
license: apache-2.0
tags:
- java
- mistral
- mistral.java
---

# Pure quantizations of `Mistral-7B-Instruct-v0.3` for [mistral.java](https://github.com/mukel/mistral.java)

In the wild, Q8_0 quantizations are fine, but Q4_0 quantizations are rarely pure, e.g. the `output.weight` tensor is quantized with Q6_K instead of Q4_0.
A pure Q4_0 quantization can be generated from a high-precision (F32, F16, BFLOAT16) .gguf source with the `quantize` utility from llama.cpp as follows:

```
./quantize --pure ./Mistral-7B-Instruct-v0.3-F32.gguf ./Mistral-7B-Instruct-v0.3-Q4_0.gguf Q4_0
```

Original model: [https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)

**Note that this model does not support a system prompt.**

The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-7B-v0.3.

Mistral-7B-v0.3 has the following changes compared to Mistral-7B-v0.2:
- Extended vocabulary to 32768
- Supports v3 Tokenizer
- Supports function calling
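
For context on what "pure Q4_0" means at the byte level, below is a minimal Java sketch (not part of mistral.java; class and method names are illustrative) of how a single Q4_0 block is laid out and dequantized, following the block format used by llama.cpp: 32 weights share one fp16 scale, and each weight is a 4-bit value biased by 8. It requires Java 20+ for `Float.float16ToFloat`.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch of GGUF Q4_0 block dequantization: each block stores 32 weights
// as one fp16 scale followed by 16 bytes of packed 4-bit values.
public final class Q4_0Block {
    static final int BLOCK_SIZE = 32;           // weights per block
    static final int BYTES_PER_BLOCK = 2 + 16;  // fp16 scale + 16 packed bytes

    /** Dequantizes one Q4_0 block read from buf into out[0..31]. */
    static void dequantize(ByteBuffer buf, float[] out) {
        float d = Float.float16ToFloat(buf.getShort()); // per-block scale
        for (int j = 0; j < 16; j++) {
            int q = buf.get() & 0xFF;
            out[j]      = d * ((q & 0x0F) - 8); // low nibble -> element j
            out[j + 16] = d * ((q >>> 4) - 8);  // high nibble -> element j + 16
        }
    }

    public static void main(String[] args) {
        // Hypothetical block: scale = 1.0 (fp16 0x3C00), every nibble = 0x8,
        // which maps to 0.0 after the -8 bias.
        ByteBuffer buf = ByteBuffer.allocate(BYTES_PER_BLOCK).order(ByteOrder.LITTLE_ENDIAN);
        buf.putShort((short) 0x3C00);
        for (int j = 0; j < 16; j++) buf.put((byte) 0x88);
        buf.flip();
        float[] out = new float[BLOCK_SIZE];
        dequantize(buf, out);
        System.out.println(java.util.Arrays.toString(out)); // all zeros
    }
}
```

A pure Q4_0 file applies this one block format to every weight tensor, which is what lets a simple inference engine like mistral.java handle the file with a single dequantization path.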