Commit d179fd4 · x0001 committed · 0 parent(s)

Duplicate from localmodels/LLM

Files changed:
- .gitattributes +35 -0
- README.md +36 -0
- llama-2-7b.ggmlv3.q2_K.bin +3 -0
- llama-2-7b.ggmlv3.q3_K_L.bin +3 -0
- llama-2-7b.ggmlv3.q3_K_M.bin +3 -0
- llama-2-7b.ggmlv3.q3_K_S.bin +3 -0
- llama-2-7b.ggmlv3.q4_0.bin +3 -0
- llama-2-7b.ggmlv3.q4_1.bin +3 -0
- llama-2-7b.ggmlv3.q4_K_M.bin +3 -0
- llama-2-7b.ggmlv3.q4_K_S.bin +3 -0
- llama-2-7b.ggmlv3.q5_0.bin +3 -0
- llama-2-7b.ggmlv3.q5_1.bin +3 -0
- llama-2-7b.ggmlv3.q5_K_M.bin +3 -0
- llama-2-7b.ggmlv3.q5_K_S.bin +3 -0
- llama-2-7b.ggmlv3.q6_K.bin +3 -0
- llama-2-7b.ggmlv3.q8_0.bin +3 -0
.gitattributes
ADDED
@@ -0,0 +1,35 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
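Every `*.bin` weight file below is therefore tracked by Git LFS: a clone made without LFS support typically checks out only the small text pointers (the `version` / `oid` / `size` stanzas shown in the file entries further down) rather than the multi-gigabyte weights. A minimal sketch for telling the two apart locally (file name taken from this repo, assumed to sit in the working directory):

```python
# Minimal sketch: detect whether a local *.bin is the real GGML weights or
# still a Git LFS pointer (what a clone without LFS typically leaves behind).
# Pointer files begin with the spec line shown in the *.bin entries below.
LFS_SPEC = b"version https://git-lfs.github.com/spec/v1"

def is_lfs_pointer(path: str) -> bool:
    """True if the file starts with the Git LFS pointer header."""
    with open(path, "rb") as f:
        return f.read(len(LFS_SPEC)) == LFS_SPEC

if __name__ == "__main__":
    print(is_lfs_pointer("llama-2-7b.ggmlv3.q2_K.bin"))
```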
README.md
ADDED
@@ -0,0 +1,36 @@
+---
+duplicated_from: localmodels/LLM
+---
+# Llama 2 7B ggml
+
+From: https://huggingface.co/meta-llama/Llama-2-7b-hf
+
+---
+
+### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
+
+Quantized using an older version of llama.cpp; compatible with llama.cpp as of May 19, commit 2d5db48.
+
+### k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q5_K_M, q6_K`
+
+These quantization methods are compatible with llama.cpp as of June 6, commit 2d43387.
+
+---
+
+## Provided files
+| Name | Quant method | Bits | Size | Max RAM required (no GPU offloading) | Use case |
+| ---- | ---- | ---- | ---- | ---- | ----- |
+| llama-2-7b.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
+| llama-2-7b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
+| llama-2-7b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
+| llama-2-7b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
+| llama-2-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original quant method, 4-bit. |
+| llama-2-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0; however, it has quicker inference than the q5 models. |
+| llama-2-7b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
+| llama-2-7b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
+| llama-2-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage, and slower inference. |
+| llama-2-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
+| llama-2-7b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
+| llama-2-7b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
+| llama-2-7b.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization. |
+| llama-2-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
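The Max RAM figures above are roughly the file size plus 2.5 GB of working overhead, with no layers offloaded to a GPU. As a rough usage sketch (not part of this repo), any of these files can be run on CPU with llama-cpp-python, provided a pre-GGUF release is used; current GGUF-only builds do not read GGMLv3 files, so an older version such as llama-cpp-python 0.1.78 is assumed here:

```python
# Minimal sketch: run one of the GGMLv3 files on CPU with llama-cpp-python.
# Assumes a pre-GGUF release (e.g. llama-cpp-python 0.1.78); newer releases
# read GGUF only and will reject these .bin files.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b.ggmlv3.q4_K_M.bin",  # any file from the table above
    n_ctx=2048,   # Llama 2 context window
    n_threads=8,  # tune to your CPU
)

out = llm("Q: What is quantization? A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```

q4_K_M is used above only as an example; lower-bit files trade accuracy for a smaller memory footprint, as described in the use-case column.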
llama-2-7b.ggmlv3.q2_K.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4639919d56b05e0cf44edcee7627f345a7b3d3b35cfafb347864acadf503ef28
+size 2866807424
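Each pointer records the sha256 `oid` and exact byte `size` of the underlying file, so a completed download can be verified locally. A minimal sketch using the q2_K values from the pointer above:

```python
# Minimal sketch: verify a downloaded file against the Git LFS pointer's
# sha256 oid and byte size (values copied from the q2_K pointer above).
import hashlib
import os

EXPECTED_OID = "4639919d56b05e0cf44edcee7627f345a7b3d3b35cfafb347864acadf503ef28"
EXPECTED_SIZE = 2866807424
PATH = "llama-2-7b.ggmlv3.q2_K.bin"

assert os.path.getsize(PATH) == EXPECTED_SIZE, "size mismatch"

digest = hashlib.sha256()
with open(PATH, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        digest.update(chunk)

assert digest.hexdigest() == EXPECTED_OID, "checksum mismatch"
print("OK: file matches its LFS pointer")
```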
llama-2-7b.ggmlv3.q3_K_L.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b32750ddc5bfb52bae45f846080d67eb9cb6be2a5843ae061dcc91cb3f0dc411
+size 3596821120
llama-2-7b.ggmlv3.q3_K_M.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1ef7ec8f8fac2793448fdbc672afc6f43f470e72f50bb5b26c170b430e9d9746
+size 3282248320
llama-2-7b.ggmlv3.q3_K_S.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a20353e17c3f92834a261dd59e3362576f98d213e68b2c55ab61e0a1d5e7d1a0
+size 2948014720
llama-2-7b.ggmlv3.q4_0.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bfa26d855e44629c4cf919985e90bd7fa03b77eea1676791519e39a4d45fd4d5
+size 3791725184
llama-2-7b.ggmlv3.q4_1.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:83c7c75d51ad6ed5dd58a3a3005375a0340aa614842ebcb2cff6596bd6dec159
+size 4212859520
llama-2-7b.ggmlv3.q4_K_M.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:16baf59cbb724f99def35f000d1e0fd9c71938649b1856dd81d3a7e3b3a752bd
+size 4080714368
llama-2-7b.ggmlv3.q4_K_S.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:acaf2b0976608a1ae663e48c50199fc96db038367931a3d1928135d6d8f35651
+size 3825517184
llama-2-7b.ggmlv3.q5_0.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a3e3233c8c46d88a7c692d2204bfc7212d13397d0f984114b814ab160f7b43d2
+size 4633993856
llama-2-7b.ggmlv3.q5_1.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f850c4b9d3448b11bfdd9a538efef014174eed839066e55396a4b47c5bc3cd03
+size 5055128192
llama-2-7b.ggmlv3.q5_K_M.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2a352392044c1919c7380dd53e3e5dcf6bd77b97f5f0663d72da68e372483cf6
+size 4782867072
llama-2-7b.ggmlv3.q5_K_S.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:90915617425bb045f5d2d468d1b05935b69ea4b09e4dad51c8a8d1e9186caa35
+size 4651401856
llama-2-7b.ggmlv3.q6_K.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4a729c62010251307772561f86a49b8f9f53e569ef41dbf0f0577dea6e07acc3
+size 5528904320
llama-2-7b.ggmlv3.q8_0.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5bb0702855f0c8abc645ea68c4d41e05207964ff54dd38c2787c1c1206cae121
+size 7160799872