morriszms committed · verified
Commit 8bd3169 · 1 Parent(s): 5f3bb0e

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ EZO-Llama-3.2-3B-Instruct-dpoE-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ EZO-Llama-3.2-3B-Instruct-dpoE-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ EZO-Llama-3.2-3B-Instruct-dpoE-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ EZO-Llama-3.2-3B-Instruct-dpoE-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ EZO-Llama-3.2-3B-Instruct-dpoE-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ EZO-Llama-3.2-3B-Instruct-dpoE-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ EZO-Llama-3.2-3B-Instruct-dpoE-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ EZO-Llama-3.2-3B-Instruct-dpoE-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ EZO-Llama-3.2-3B-Instruct-dpoE-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ EZO-Llama-3.2-3B-Instruct-dpoE-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ EZO-Llama-3.2-3B-Instruct-dpoE-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ EZO-Llama-3.2-3B-Instruct-dpoE-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
EZO-Llama-3.2-3B-Instruct-dpoE-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:92cb503d7126c66ad28d8fb5aa70daf87679fe9f4231bc5e8915d7250b2e770d
+ size 1363935936
EZO-Llama-3.2-3B-Instruct-dpoE-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:de0c558bd6a5b996f9c1a61de9884e988aed3e085f8dee8e422fb65ca3b3011a
+ size 1815347904
EZO-Llama-3.2-3B-Instruct-dpoE-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:36e362e4028b4502f02cd29a2bcd7a131e075c9a1768845c3eae425f379afd6f
+ size 1687159488
EZO-Llama-3.2-3B-Instruct-dpoE-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b8d2e28b8e537dbb5aa954be71d53d893dba6d513dd273815d53e9f30c8225a2
+ size 1542849216
EZO-Llama-3.2-3B-Instruct-dpoE-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a901c5c99a0220523e61bdd24863687da1db17b9b10b208a64ac7afa3b2cc6e6
+ size 1917190848
EZO-Llama-3.2-3B-Instruct-dpoE-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d2cefd0f0fbe66376a89cbbdd5284202465526bce573361f1040f93fe081425
+ size 2019377856
EZO-Llama-3.2-3B-Instruct-dpoE-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e7f1435c883c6783b94e2247af98fe4227a4bbafa42086b23a6d094cc4fddcba
+ size 1928200896
EZO-Llama-3.2-3B-Instruct-dpoE-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:126ff9da740c96d4ed72b25ce444c7a762dc5d5ac531c1ad3cf07a75fa41f639
+ size 2269512384
EZO-Llama-3.2-3B-Instruct-dpoE-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:939e2b4b91c0287e9849f8f1ff3dca7d56cadfa4f8d64567b04a1acfc638ac3d
+ size 2322154176
EZO-Llama-3.2-3B-Instruct-dpoE-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d2b353ffd1b8f4ff4ad9890f1f9a161e306708196adf0f94a50217360958becc
+ size 2269512384
EZO-Llama-3.2-3B-Instruct-dpoE-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:35bef0d0396fd3b5a3fa7d37e79c443fb276f8c8d1499ba44b778034eca484cc
+ size 2643854016
EZO-Llama-3.2-3B-Instruct-dpoE-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f8f0c94bb322563f2b82498966c59cc06c5f50bc71df32e5c3a829e60e75dede
+ size 3421899456
README.md ADDED
@@ -0,0 +1,100 @@
+ ---
+ language:
+ - en
+ - ja
+ - de
+ - fr
+ - it
+ - pt
+ - hi
+ - es
+ - th
+ library_name: transformers
+ license: llama3.2
+ base_model: AXCXEPT/EZO-Llama-3.2-3B-Instruct-dpoE
+ tags:
+ - facebook
+ - meta
+ - pytorch
+ - llama
+ - llama-3
+ - TensorBlock
+ - GGUF
+ pipeline_tag: text-generation
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## AXCXEPT/EZO-Llama-3.2-3B-Instruct-dpoE - GGUF
+
+ This repo contains GGUF format model files for [AXCXEPT/EZO-Llama-3.2-3B-Instruct-dpoE](https://huggingface.co/AXCXEPT/EZO-Llama-3.2-3B-Instruct-dpoE).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
+
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Run them on the TensorBlock client using your local machine ↗
+ </a>
+ </div>
+
+ ## Prompt template
+
+ ```
+ <|begin_of_text|><|start_header_id|>system<|end_header_id|>
+
+ Cutting Knowledge Date: December 2023
+ Today Date: 26 Nov 2024
+
+ {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
+
+ {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
+ ```
+
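The chat template above can be filled in programmatically. A minimal sketch, assuming a hypothetical `build_prompt` helper (not part of this repo — llama.cpp front-ends typically apply the embedded chat template for you):

```python
# Hypothetical helper that fills in the Llama 3.2 chat template shown above.
# The special tokens are copied verbatim from the prompt template in this card.
def build_prompt(system_prompt: str, prompt: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        "Cutting Knowledge Date: December 2023\n"
        "Today Date: 26 Nov 2024\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

text = build_prompt("You are a helpful assistant.", "Hello!")
print(text)
```

Generation should then be stopped on `<|eot_id|>`, which the template uses as its turn terminator.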
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [EZO-Llama-3.2-3B-Instruct-dpoE-Q2_K.gguf](https://huggingface.co/tensorblock/EZO-Llama-3.2-3B-Instruct-dpoE-GGUF/blob/main/EZO-Llama-3.2-3B-Instruct-dpoE-Q2_K.gguf) | Q2_K | 1.364 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [EZO-Llama-3.2-3B-Instruct-dpoE-Q3_K_S.gguf](https://huggingface.co/tensorblock/EZO-Llama-3.2-3B-Instruct-dpoE-GGUF/blob/main/EZO-Llama-3.2-3B-Instruct-dpoE-Q3_K_S.gguf) | Q3_K_S | 1.543 GB | very small, high quality loss |
+ | [EZO-Llama-3.2-3B-Instruct-dpoE-Q3_K_M.gguf](https://huggingface.co/tensorblock/EZO-Llama-3.2-3B-Instruct-dpoE-GGUF/blob/main/EZO-Llama-3.2-3B-Instruct-dpoE-Q3_K_M.gguf) | Q3_K_M | 1.687 GB | very small, high quality loss |
+ | [EZO-Llama-3.2-3B-Instruct-dpoE-Q3_K_L.gguf](https://huggingface.co/tensorblock/EZO-Llama-3.2-3B-Instruct-dpoE-GGUF/blob/main/EZO-Llama-3.2-3B-Instruct-dpoE-Q3_K_L.gguf) | Q3_K_L | 1.815 GB | small, substantial quality loss |
+ | [EZO-Llama-3.2-3B-Instruct-dpoE-Q4_0.gguf](https://huggingface.co/tensorblock/EZO-Llama-3.2-3B-Instruct-dpoE-GGUF/blob/main/EZO-Llama-3.2-3B-Instruct-dpoE-Q4_0.gguf) | Q4_0 | 1.917 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [EZO-Llama-3.2-3B-Instruct-dpoE-Q4_K_S.gguf](https://huggingface.co/tensorblock/EZO-Llama-3.2-3B-Instruct-dpoE-GGUF/blob/main/EZO-Llama-3.2-3B-Instruct-dpoE-Q4_K_S.gguf) | Q4_K_S | 1.928 GB | small, greater quality loss |
+ | [EZO-Llama-3.2-3B-Instruct-dpoE-Q4_K_M.gguf](https://huggingface.co/tensorblock/EZO-Llama-3.2-3B-Instruct-dpoE-GGUF/blob/main/EZO-Llama-3.2-3B-Instruct-dpoE-Q4_K_M.gguf) | Q4_K_M | 2.019 GB | medium, balanced quality - recommended |
+ | [EZO-Llama-3.2-3B-Instruct-dpoE-Q5_0.gguf](https://huggingface.co/tensorblock/EZO-Llama-3.2-3B-Instruct-dpoE-GGUF/blob/main/EZO-Llama-3.2-3B-Instruct-dpoE-Q5_0.gguf) | Q5_0 | 2.270 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [EZO-Llama-3.2-3B-Instruct-dpoE-Q5_K_S.gguf](https://huggingface.co/tensorblock/EZO-Llama-3.2-3B-Instruct-dpoE-GGUF/blob/main/EZO-Llama-3.2-3B-Instruct-dpoE-Q5_K_S.gguf) | Q5_K_S | 2.270 GB | large, low quality loss - recommended |
+ | [EZO-Llama-3.2-3B-Instruct-dpoE-Q5_K_M.gguf](https://huggingface.co/tensorblock/EZO-Llama-3.2-3B-Instruct-dpoE-GGUF/blob/main/EZO-Llama-3.2-3B-Instruct-dpoE-Q5_K_M.gguf) | Q5_K_M | 2.322 GB | large, very low quality loss - recommended |
+ | [EZO-Llama-3.2-3B-Instruct-dpoE-Q6_K.gguf](https://huggingface.co/tensorblock/EZO-Llama-3.2-3B-Instruct-dpoE-GGUF/blob/main/EZO-Llama-3.2-3B-Instruct-dpoE-Q6_K.gguf) | Q6_K | 2.644 GB | very large, extremely low quality loss |
+ | [EZO-Llama-3.2-3B-Instruct-dpoE-Q8_0.gguf](https://huggingface.co/tensorblock/EZO-Llama-3.2-3B-Instruct-dpoE-GGUF/blob/main/EZO-Llama-3.2-3B-Instruct-dpoE-Q8_0.gguf) | Q8_0 | 3.422 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/EZO-Llama-3.2-3B-Instruct-dpoE-GGUF --include "EZO-Llama-3.2-3B-Instruct-dpoE-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/EZO-Llama-3.2-3B-Instruct-dpoE-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
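Beyond the CLI, each file can also be fetched over plain HTTPS using the standard Hub layout `https://huggingface.co/<repo>/resolve/main/<file>`. A small illustrative sketch (the `gguf_url` helper is not part of this repo):

```python
REPO_ID = "tensorblock/EZO-Llama-3.2-3B-Instruct-dpoE-GGUF"

def gguf_url(quant: str) -> str:
    # Build the direct-download URL for one quantized file in this repo,
    # following the Hub's resolve/main URL scheme.
    filename = f"EZO-Llama-3.2-3B-Instruct-dpoE-{quant}.gguf"
    return f"https://huggingface.co/{REPO_ID}/resolve/main/{filename}"

print(gguf_url("Q4_K_M"))
```

The resulting URL can be passed to `curl -L` or `wget`; the Hub redirects LFS files to the actual storage backend.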