Llamacpp quants
- .gitattributes +16 -0
- README.md +34 -0
- Tess-7B-v2.0-IQ3_M.gguf +3 -0
- Tess-7B-v2.0-IQ3_S.gguf +3 -0
- Tess-7B-v2.0-IQ4_NL.gguf +3 -0
- Tess-7B-v2.0-IQ4_XS.gguf +3 -0
- Tess-7B-v2.0-Q2_K.gguf +3 -0
- Tess-7B-v2.0-Q3_K_L.gguf +3 -0
- Tess-7B-v2.0-Q3_K_M.gguf +3 -0
- Tess-7B-v2.0-Q3_K_S.gguf +3 -0
- Tess-7B-v2.0-Q4_0.gguf +3 -0
- Tess-7B-v2.0-Q4_K_M.gguf +3 -0
- Tess-7B-v2.0-Q4_K_S.gguf +3 -0
- Tess-7B-v2.0-Q5_0.gguf +3 -0
- Tess-7B-v2.0-Q5_K_M.gguf +3 -0
- Tess-7B-v2.0-Q5_K_S.gguf +3 -0
- Tess-7B-v2.0-Q6_K.gguf +3 -0
- Tess-7B-v2.0-Q8_0.gguf +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,19 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
+Tess-7B-v2.0-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
+Tess-7B-v2.0-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
+Tess-7B-v2.0-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
+Tess-7B-v2.0-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Tess-7B-v2.0-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+Tess-7B-v2.0-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Tess-7B-v2.0-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Tess-7B-v2.0-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Tess-7B-v2.0-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+Tess-7B-v2.0-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Tess-7B-v2.0-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Tess-7B-v2.0-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+Tess-7B-v2.0-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Tess-7B-v2.0-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Tess-7B-v2.0-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Tess-7B-v2.0-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
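These `filter=lfs` rules mean the GGUF weights live in Git LFS, so a clone made without `git-lfs` installed leaves small text pointer stubs (like the pointer files added further down) in place of the multi-gigabyte blobs. A minimal sketch of how one might detect that situation locally; the file path is a hypothetical example, not something this commit prescribes:

```python
# Sketch: distinguish a real GGUF blob from a Git LFS pointer stub.
# Assumes the path below points into a local checkout of this repo.
from pathlib import Path

LFS_SPEC = b"version https://git-lfs.github.com/spec/v1"
GGUF_MAGIC = b"GGUF"  # real GGUF files begin with this 4-byte magic


def classify(path: str) -> str:
    head = Path(path).open("rb").read(64)
    if head.startswith(GGUF_MAGIC):
        return "real GGUF weights"
    if head.startswith(LFS_SPEC):
        return "Git LFS pointer stub (fetch the actual blob, e.g. via git lfs pull)"
    return "unknown file type"


print(classify("Tess-7B-v2.0-Q4_K_M.gguf"))  # path is an assumption
```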
README.md
ADDED
@@ -0,0 +1,34 @@
+---
+license: apache-2.0
+quantized_by: bartowski
+pipeline_tag: text-generation
+---
+
+## Llamacpp Quantizations of Tess-7B-v2.0
+
+Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2440">b2440</a> for quantization.
+
+Original model: https://huggingface.co/migtissera/Tess-7B-v2.0
+
+Download a file (not the whole branch) from below:
+
+| Filename | Quant type | File Size | Description |
+| -------- | ---------- | --------- | ----------- |
+| [Tess-7B-v2.0-Q8_0.gguf](https://huggingface.co/bartowski/Tess-7B-v2.0-GGUF/blob/main/Tess-7B-v2.0-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
+| [Tess-7B-v2.0-Q6_K.gguf](https://huggingface.co/bartowski/Tess-7B-v2.0-GGUF/blob/main/Tess-7B-v2.0-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
+| [Tess-7B-v2.0-Q5_K_M.gguf](https://huggingface.co/bartowski/Tess-7B-v2.0-GGUF/blob/main/Tess-7B-v2.0-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, very usable. |
+| [Tess-7B-v2.0-Q5_K_S.gguf](https://huggingface.co/bartowski/Tess-7B-v2.0-GGUF/blob/main/Tess-7B-v2.0-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, very usable. |
+| [Tess-7B-v2.0-Q5_0.gguf](https://huggingface.co/bartowski/Tess-7B-v2.0-GGUF/blob/main/Tess-7B-v2.0-Q5_0.gguf) | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
+| [Tess-7B-v2.0-Q4_K_M.gguf](https://huggingface.co/bartowski/Tess-7B-v2.0-GGUF/blob/main/Tess-7B-v2.0-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, similar to 4.25 bpw. |
+| [Tess-7B-v2.0-Q4_K_S.gguf](https://huggingface.co/bartowski/Tess-7B-v2.0-GGUF/blob/main/Tess-7B-v2.0-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
+| [Tess-7B-v2.0-IQ4_NL.gguf](https://huggingface.co/bartowski/Tess-7B-v2.0-GGUF/blob/main/Tess-7B-v2.0-IQ4_NL.gguf) | IQ4_NL | 4.15GB | Good quality, similar to Q4_K_S, using a newer quantization method. |
+| [Tess-7B-v2.0-IQ4_XS.gguf](https://huggingface.co/bartowski/Tess-7B-v2.0-GGUF/blob/main/Tess-7B-v2.0-IQ4_XS.gguf) | IQ4_XS | 3.94GB | Decent quality, new method with similar performance to Q4. |
+| [Tess-7B-v2.0-Q4_0.gguf](https://huggingface.co/bartowski/Tess-7B-v2.0-GGUF/blob/main/Tess-7B-v2.0-Q4_0.gguf) | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
+| [Tess-7B-v2.0-IQ3_M.gguf](https://huggingface.co/bartowski/Tess-7B-v2.0-GGUF/blob/main/Tess-7B-v2.0-IQ3_M.gguf) | IQ3_M | 3.28GB | Medium-low quality, new method with decent performance. |
+| [Tess-7B-v2.0-IQ3_S.gguf](https://huggingface.co/bartowski/Tess-7B-v2.0-GGUF/blob/main/Tess-7B-v2.0-IQ3_S.gguf) | IQ3_S | 3.18GB | Lower quality, new method with decent performance, recommended over Q3 quants. |
+| [Tess-7B-v2.0-Q3_K_L.gguf](https://huggingface.co/bartowski/Tess-7B-v2.0-GGUF/blob/main/Tess-7B-v2.0-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
+| [Tess-7B-v2.0-Q3_K_M.gguf](https://huggingface.co/bartowski/Tess-7B-v2.0-GGUF/blob/main/Tess-7B-v2.0-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
+| [Tess-7B-v2.0-Q3_K_S.gguf](https://huggingface.co/bartowski/Tess-7B-v2.0-GGUF/blob/main/Tess-7B-v2.0-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
+| [Tess-7B-v2.0-Q2_K.gguf](https://huggingface.co/bartowski/Tess-7B-v2.0-GGUF/blob/main/Tess-7B-v2.0-Q2_K.gguf) | Q2_K | 2.71GB | Extremely low quality, *not* recommended. |
+
+Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
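The README's "download a file (not the whole branch)" advice maps directly onto `huggingface_hub`. A minimal sketch, assuming `huggingface_hub` (and optionally `llama-cpp-python`) is installed; the choice of the Q4_K_M quant, context size, and prompt are illustrative assumptions, not part of this repo:

```python
# Sketch: fetch a single quant from this repo instead of cloning every GGUF file.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="bartowski/Tess-7B-v2.0-GGUF",
    filename="Tess-7B-v2.0-Q4_K_M.gguf",  # pick any row from the table above
)

# Optional: run it with llama-cpp-python, one of several GGUF-capable runtimes.
from llama_cpp import Llama

llm = Llama(model_path=model_path, n_ctx=4096)  # context size is an assumption
out = llm("Write one sentence about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```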
Tess-7B-v2.0-IQ3_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e66f36b5338cc43c666b4021dfab18ee8dff3fc315d519e5f39467eeced29623
+size 3284891936
Tess-7B-v2.0-IQ3_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:69b5c9cd9ea16a9ba186460c048811cdaaaa418666249b1ffa5b6e36167d939b
+size 3182393632
Tess-7B-v2.0-IQ4_NL.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05f0a587aba80f335cb476d5e8ecf2f1b301693589028e5ed0cd327eed08e8a1
+size 4155054368
Tess-7B-v2.0-IQ4_XS.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:35c97763ed2f4978b1cf53e282fdf6725e5709b5e39e7f347d732375d04884d0
+size 3944388896
Tess-7B-v2.0-Q2_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ed490e50841dc9ad7bac8e6b0f663fdd06ecf5c2f693e3e5ac243f6b25bed7cd
+size 2719242528
Tess-7B-v2.0-Q3_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:590958788a8e46d12a5fbba183b1a7444bce6c52518db6874dcce1e8c778264f
+size 3822024992
Tess-7B-v2.0-Q3_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c558f96b8d774fcb0e1e5a479ca922ac36462f9345d5f4498166f96cf0a64946
+size 3518986528
Tess-7B-v2.0-Q3_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f080a3e73c12bd58e95f5c9c62801f0f64298a5568cf5de51f8f359ad3b83225
+size 3164567840
Tess-7B-v2.0-Q4_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8f46fd90a84dda8bad39fbe7383a740cabb9ed8014942329c695cff614f023f3
+size 4108917024
Tess-7B-v2.0-Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dd63bda362e8b40c3f00e484334e431f06761b4b7559cec59298cf012fac837b
+size 4368439584
Tess-7B-v2.0-Q4_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59a0cc5a9fa45d50e58f98c5ea0068768175c772af20084aeb8601125cb1413d
+size 4140374304
Tess-7B-v2.0-Q5_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ef681bb7bd7c1d5255d1bfc31f95da33e0efbb311fa35806920795fbb20ec63
+size 4997716256
Tess-7B-v2.0-Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c0d22409ef1537faf0692b9eddd5ddf3a377d18a39b497c1174afed9f1046151
+size 5131409696
Tess-7B-v2.0-Q5_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ba3c47c851241d0421b3fe1d253389d235f5684678ec5ea13db6fdba156cbe1a
+size 4997716256
Tess-7B-v2.0-Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b21eedb539178cdc7f11a548c601b09b981904cdc79d5bee679fccd18943344f
+size 5942065440
Tess-7B-v2.0-Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c3dc282ab51fb96fb34e4fd5e6379182842b2159da65b1f8bdeed214f027570c
+size 7695857952
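Each of the entries above is a Git LFS pointer: `oid sha256:...` is the SHA-256 digest of the actual blob and `size` is its byte count, so a downloaded quant can be checked against them. A small sketch, assuming the Q8_0 file has already been downloaded next to the script; the expected values are copied from the Tess-7B-v2.0-Q8_0.gguf pointer above, while the local filename is an assumption:

```python
# Sketch: verify a downloaded GGUF against its LFS pointer's sha256 oid and size.
import hashlib
import os

EXPECTED_SHA256 = "c3dc282ab51fb96fb34e4fd5e6379182842b2159da65b1f8bdeed214f027570c"
EXPECTED_SIZE = 7695857952
PATH = "Tess-7B-v2.0-Q8_0.gguf"  # local path is an assumption

assert os.path.getsize(PATH) == EXPECTED_SIZE, "size mismatch"

h = hashlib.sha256()
with open(PATH, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

assert h.hexdigest() == EXPECTED_SHA256, "sha256 mismatch"
print("checksum and size match the LFS pointer")
```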