Initial GGML model commit
README.md CHANGED
```diff
@@ -3,7 +3,7 @@ inference: false
 license: other
 model_creator: WizardLM
 model_link: https://huggingface.co/WizardLM/WizardMath-7b-V1.0
-model_name: WizardMath
+model_name: WizardMath 7B V1.0
 model_type: llama
 quantized_by: TheBloke
 ---
@@ -22,13 +22,13 @@ quantized_by: TheBloke
 </div>
 <!-- header end -->
 
-# WizardMath
+# WizardMath 7B V1.0 - GGML
 - Model creator: [WizardLM](https://huggingface.co/WizardLM)
-- Original model: [WizardMath
+- Original model: [WizardMath 7B V1.0](https://huggingface.co/WizardLM/WizardMath-7b-V1.0)
 
 ## Description
 
-This repo contains GGML format model files for [WizardLM's WizardMath
+This repo contains GGML format model files for [WizardLM's WizardMath 7B V1.0](https://huggingface.co/WizardLM/WizardMath-7b-V1.0).
 
 GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
 * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVIDIA CUDA GPU acceleration.
@@ -40,8 +40,8 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 
 ## Repositories available
 
-* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WizardMath-
-* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardMath-
+* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WizardMath-7B-V1.0-GPTQ)
+* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardMath-7B-V1.0-GGML)
 * [WizardLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/WizardLM/WizardMath-7b-V1.0)
 
 ## Prompt template: Alpaca-CoT
@@ -84,20 +84,20 @@ Refer to the Provided Files table below to see what files use which methods, and
 
 | Name | Quant method | Bits | Size | Max RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
-| [wizardmath-7b-v1.0.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/WizardMath-
-| [wizardmath-7b-v1.0.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/WizardMath-
-| [wizardmath-7b-v1.0.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/WizardMath-
-| [wizardmath-7b-v1.0.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/WizardMath-
-| [wizardmath-7b-v1.0.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/WizardMath-
-| [wizardmath-7b-v1.0.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/WizardMath-
-| [wizardmath-7b-v1.0.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/WizardMath-
-| [wizardmath-7b-v1.0.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/WizardMath-
-| [wizardmath-7b-v1.0.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/WizardMath-
-| [wizardmath-7b-v1.0.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/WizardMath-
-| [wizardmath-7b-v1.0.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/WizardMath-
-| [wizardmath-7b-v1.0.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/WizardMath-
-| [wizardmath-7b-v1.0.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/WizardMath-
-| [wizardmath-7b-v1.0.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/WizardMath-
+| [wizardmath-7b-v1.0.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/WizardMath-7B-V1.0-GGML/blob/main/wizardmath-7b-v1.0.ggmlv3.q2_K.bin) | q2_K | 2 | 3.05 GB | 5.55 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
+| [wizardmath-7b-v1.0.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/WizardMath-7B-V1.0-GGML/blob/main/wizardmath-7b-v1.0.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 3.77 GB | 6.27 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+| [wizardmath-7b-v1.0.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/WizardMath-7B-V1.0-GGML/blob/main/wizardmath-7b-v1.0.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 3.45 GB | 5.95 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+| [wizardmath-7b-v1.0.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/WizardMath-7B-V1.0-GGML/blob/main/wizardmath-7b-v1.0.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 3.12 GB | 5.62 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
+| [wizardmath-7b-v1.0.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/WizardMath-7B-V1.0-GGML/blob/main/wizardmath-7b-v1.0.ggmlv3.q4_0.bin) | q4_0 | 4 | 3.79 GB | 6.29 GB | Original quant method, 4-bit. |
+| [wizardmath-7b-v1.0.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/WizardMath-7B-V1.0-GGML/blob/main/wizardmath-7b-v1.0.ggmlv3.q4_1.bin) | q4_1 | 4 | 4.21 GB | 6.71 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, has quicker inference than q5 models. |
+| [wizardmath-7b-v1.0.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/WizardMath-7B-V1.0-GGML/blob/main/wizardmath-7b-v1.0.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 4.24 GB | 6.74 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
+| [wizardmath-7b-v1.0.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/WizardMath-7B-V1.0-GGML/blob/main/wizardmath-7b-v1.0.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 3.98 GB | 6.48 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
+| [wizardmath-7b-v1.0.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/WizardMath-7B-V1.0-GGML/blob/main/wizardmath-7b-v1.0.ggmlv3.q5_0.bin) | q5_0 | 5 | 4.63 GB | 7.13 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
+| [wizardmath-7b-v1.0.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/WizardMath-7B-V1.0-GGML/blob/main/wizardmath-7b-v1.0.ggmlv3.q5_1.bin) | q5_1 | 5 | 5.06 GB | 7.56 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
+| [wizardmath-7b-v1.0.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/WizardMath-7B-V1.0-GGML/blob/main/wizardmath-7b-v1.0.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 4.92 GB | 7.42 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
+| [wizardmath-7b-v1.0.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/WizardMath-7B-V1.0-GGML/blob/main/wizardmath-7b-v1.0.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 4.79 GB | 7.29 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
+| [wizardmath-7b-v1.0.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/WizardMath-7B-V1.0-GGML/blob/main/wizardmath-7b-v1.0.ggmlv3.q6_K.bin) | q6_K | 6 | 5.65 GB | 8.15 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
+| [wizardmath-7b-v1.0.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/WizardMath-7B-V1.0-GGML/blob/main/wizardmath-7b-v1.0.ggmlv3.q8_0.bin) | q8_0 | 8 | 7.16 GB | 9.66 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
 
 **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
 
@@ -151,7 +151,7 @@ Thank you to all my generous patrons and donaters!
 
 <!-- footer end -->
 
-# Original model card: WizardLM's WizardMath
+# Original model card: WizardLM's WizardMath 7B V1.0
```
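The changed hunks reference the README's "Prompt template: Alpaca-CoT" heading, but the template body itself sits outside this diff. As a hedged sketch, the snippet below builds that prompt using the wording WizardMath publishes for its Alpaca-CoT format; treat the exact template string as an assumption, since this commit does not quote it.

```python
# Sketch of the Alpaca-CoT prompt named in the README's "Prompt template"
# section. The template wording is WizardMath's published format, assumed
# here because the diff does not quote it.
ALPACA_COT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response: Let's think step by step."
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca-CoT template."""
    return ALPACA_COT_TEMPLATE.format(instruction=instruction)

print(build_prompt("What is 25 * 4 + 8?"))
```

The trailing "Let's think step by step." is what makes this a CoT variant of Alpaca: it steers the model into writing out its reasoning before the final answer.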
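The provided-files table lists per-file RAM figures that assume pure CPU inference, and the note beneath it explains that offloading layers moves that cost into VRAM. The sketch below shows both points with llama-cpp-python, assuming a GGML-era release of that library (current releases load GGUF files and will not open these .ggmlv3.bin files); the chosen file and offload count are illustrative.

```python
# Minimal sketch, assuming a GGML-era llama-cpp-python release
# (newer releases expect GGUF and cannot load .ggmlv3.bin files).
from llama_cpp import Llama

llm = Llama(
    model_path="wizardmath-7b-v1.0.ggmlv3.q4_K_M.bin",  # any file from the table
    n_ctx=2048,       # context window
    n_gpu_layers=32,  # layers offloaded to VRAM; 0 keeps everything in RAM
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat is 25 * 4 + 8?\n\n"
    "### Response: Let's think step by step."
)

out = llm(prompt, max_tokens=512, temperature=0.7, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```

A 7B Llama model has 32 transformer layers, so `n_gpu_layers=32` offloads all of them; lower values split the model between RAM and VRAM, which is the trade-off the table's note describes.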