morriszms committed · Commit 70a9deb · verified · 1 Parent(s): 81a2eed

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ llama-3-debug-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ llama-3-debug-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ llama-3-debug-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ llama-3-debug-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ llama-3-debug-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ llama-3-debug-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ llama-3-debug-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ llama-3-debug-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ llama-3-debug-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ llama-3-debug-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ llama-3-debug-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ llama-3-debug-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
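
Each added line is a standard Git LFS tracking rule. As a minimal sketch (the filename is taken from this commit; the command assumes git-lfs is installed), the manual equivalent of one entry would be:

```shell
# Appends "llama-3-debug-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text" to .gitattributes
git lfs track "llama-3-debug-Q2_K.gguf"
```

Here the entries were written automatically by the huggingface_hub upload.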
README.md ADDED
@@ -0,0 +1,94 @@
+ ---
+ library_name: transformers
+ tags:
+ - llama-3
+ - tiny-llama
+ - nano-llama
+ - small-llama
+ - random-llama
+ - tiny
+ - small
+ - nano
+ - random
+ - debug
+ - llama-3-debug
+ - gpt
+ - generation
+ - xiaodongguaAIGC
+ - TensorBlock
+ - GGUF
+ pipeline_tag: text-generation
+ language:
+ - en
+ - zh
+ base_model: xiaodongguaAIGC/llama-3-debug
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## xiaodongguaAIGC/llama-3-debug - GGUF
+
+ This repo contains GGUF format model files for [xiaodongguaAIGC/llama-3-debug](https://huggingface.co/xiaodongguaAIGC/llama-3-debug).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
+
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Run them on the TensorBlock client using your local machine ↗
+ </a>
+ </div>
+
+ ## Prompt template
+
+ ```
+
+ ```
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [llama-3-debug-Q2_K.gguf](https://huggingface.co/tensorblock/llama-3-debug-GGUF/blob/main/llama-3-debug-Q2_K.gguf) | Q2_K | 0.021 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [llama-3-debug-Q3_K_S.gguf](https://huggingface.co/tensorblock/llama-3-debug-GGUF/blob/main/llama-3-debug-Q3_K_S.gguf) | Q3_K_S | 0.021 GB | very small, high quality loss |
+ | [llama-3-debug-Q3_K_M.gguf](https://huggingface.co/tensorblock/llama-3-debug-GGUF/blob/main/llama-3-debug-Q3_K_M.gguf) | Q3_K_M | 0.021 GB | very small, high quality loss |
+ | [llama-3-debug-Q3_K_L.gguf](https://huggingface.co/tensorblock/llama-3-debug-GGUF/blob/main/llama-3-debug-Q3_K_L.gguf) | Q3_K_L | 0.021 GB | small, substantial quality loss |
+ | [llama-3-debug-Q4_0.gguf](https://huggingface.co/tensorblock/llama-3-debug-GGUF/blob/main/llama-3-debug-Q4_0.gguf) | Q4_0 | 0.021 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [llama-3-debug-Q4_K_S.gguf](https://huggingface.co/tensorblock/llama-3-debug-GGUF/blob/main/llama-3-debug-Q4_K_S.gguf) | Q4_K_S | 0.022 GB | small, greater quality loss |
+ | [llama-3-debug-Q4_K_M.gguf](https://huggingface.co/tensorblock/llama-3-debug-GGUF/blob/main/llama-3-debug-Q4_K_M.gguf) | Q4_K_M | 0.022 GB | medium, balanced quality - recommended |
+ | [llama-3-debug-Q5_0.gguf](https://huggingface.co/tensorblock/llama-3-debug-GGUF/blob/main/llama-3-debug-Q5_0.gguf) | Q5_0 | 0.022 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [llama-3-debug-Q5_K_S.gguf](https://huggingface.co/tensorblock/llama-3-debug-GGUF/blob/main/llama-3-debug-Q5_K_S.gguf) | Q5_K_S | 0.023 GB | large, low quality loss - recommended |
+ | [llama-3-debug-Q5_K_M.gguf](https://huggingface.co/tensorblock/llama-3-debug-GGUF/blob/main/llama-3-debug-Q5_K_M.gguf) | Q5_K_M | 0.023 GB | large, very low quality loss - recommended |
+ | [llama-3-debug-Q6_K.gguf](https://huggingface.co/tensorblock/llama-3-debug-GGUF/blob/main/llama-3-debug-Q6_K.gguf) | Q6_K | 0.025 GB | very large, extremely low quality loss |
+ | [llama-3-debug-Q8_0.gguf](https://huggingface.co/tensorblock/llama-3-debug-GGUF/blob/main/llama-3-debug-Q8_0.gguf) | Q8_0 | 0.025 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Downloading instruction
+
+ ### Command line
+
+ First, install the Hugging Face CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/llama-3-debug-GGUF --include "llama-3-debug-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
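+
+ As a quick sanity check (a sketch; `MY_LOCAL_DIR` as above), the downloaded file's checksum can be compared against the `oid sha256` recorded in that file's Git LFS pointer in this commit:
+
+ ```shell
+ # Expected for Q2_K: 1a046bee9ad97615dc446a6e6377012bd7122b46a6c9ce172cb1ab3471dc2539
+ sha256sum MY_LOCAL_DIR/llama-3-debug-Q2_K.gguf
+ ```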
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/llama-3-debug-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
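+
+ ## Running the model
+
+ A minimal smoke test with llama.cpp (a sketch: it assumes a llama.cpp build at or after the commit referenced above, so the `llama-cli` binary exists, and a file downloaded to `MY_LOCAL_DIR`):
+
+ ```shell
+ # Load one quantized file and generate a few tokens
+ ./llama-cli -m MY_LOCAL_DIR/llama-3-debug-Q4_K_M.gguf -p "Hello" -n 16
+ ```
+
+ Given the `debug`/`random` tags on the base model, expect incoherent output; the point is only to confirm that the file loads and generates.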
llama-3-debug-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1a046bee9ad97615dc446a6e6377012bd7122b46a6c9ce172cb1ab3471dc2539
+ size 21181792

llama-3-debug-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:09b2a86647240253c900209d3bb6719e530c692b0da46293ae636997f1402ece
+ size 21184864

llama-3-debug-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1241141580047a49762e0e065b00ade1bd2a49e33c8e288593f9ab2e87e32149
+ size 21184096

llama-3-debug-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:95928dbcf49a9d522731881c8f45178659ce11b520af0477cac37647bfd7acdd
+ size 21181792

llama-3-debug-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:10552ac5788fe32eb9fea84ccc6fd69fc810453a44f041b7b89f9a1937a12b87
+ size 21181792

llama-3-debug-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91d65f509c0d841b54baa3ba239db3fdf611aca24da8c9c4f11e93fd4727bac8
+ size 22217568

llama-3-debug-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:77fd8bf2fb026febc5760868058a2f2bb631b295992a44b48095f590bee89498
+ size 22213216

llama-3-debug-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0036f1c519ac3b521bc913c6a2b0d7fde582e5f30233c7f20da794804f87901b
+ size 22212960

llama-3-debug-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e827df51214e15f1f422727a17a2439ebebf02ff96870f8ef2a53ea21afd8e66
+ size 22732384

llama-3-debug-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7cd7d82266fd04ca562238e90f066bc19ecb9726473096b2d5d332990271c7ef
+ size 22728544

llama-3-debug-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8edaff87a56efe9d3952bcf4f8e3f277edd373f6f8ec567cf96133c9d1800213
+ size 25306464

llama-3-debug-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:32419a7067f3b34581459cc4b65e3fa1cf165a512e9f08759ef9f61fc25971a9
+ size 25306464