morriszms committed
Commit 6367fb0 · verified · 1 Parent(s): f50039d

Upload folder using huggingface_hub

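The commit message indicates the files were pushed with `huggingface_hub`'s folder-upload API. As a rough illustration only (the exact command used is not recorded in this diff, and the local folder path below is a placeholder), such a commit can be produced with `HfApi.upload_folder`:

```python
# Illustrative sketch only: the local folder path is hypothetical, and this is
# not necessarily the exact call that produced this commit.
from huggingface_hub import HfApi

api = HfApi()  # uses the token from `huggingface-cli login` or HF_TOKEN
api.upload_folder(
    folder_path="./Qwen2.5-1.5B-Instruct-GGUF",        # local folder holding the .gguf files
    repo_id="tensorblock/Qwen2.5-1.5B-Instruct-GGUF",   # target model repo
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```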
Qwen2.5-1.5B-Instruct-Q2_K.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a261458c28d1cfecbf1b840fff6c161f55c43525c8f4ffc3ceaf1ec034181613
- size 676304896
+ oid sha256:592ca957d2611c9f2f09e49f58e0efcd5014af097051a21ae90a487655232f0e
+ size 676304832
Qwen2.5-1.5B-Instruct-Q3_K_L.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8427a7178b84e49b47ae346c2ba8e1dd1c0be47a2e2369a85f05b30fc771d5ad
- size 880162816
+ oid sha256:8ad34ea22ab2f570fa85fb9dbb9a4459460549134a015e93467b122b1ea72b75
+ size 880162752
Qwen2.5-1.5B-Instruct-Q3_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:db4550cff79d0c564f32fb0da97bae954611c47b208c13fd0ce4310092496a14
- size 824178688
+ oid sha256:0faf3f8e4ac66dd9a02e91bdbeb8534ac033da82ee0775241558a03cf5039a3f
+ size 824178624
Qwen2.5-1.5B-Instruct-Q3_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:400a9834024d2df9300bb4ce750efe277035298c4d1a1a73f9b60ace1b033717
- size 760944640
+ oid sha256:cf603322df91ef82fd78fe5dc3a8daeb364f8661fc5efe20b571a3013c92c12d
+ size 760944576
Qwen2.5-1.5B-Instruct-Q4_0.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:89e522491b59eec4b2344282647b1270b7736a018fa6125ebe7fa7f67f2eb332
- size 934955008
+ oid sha256:b46378a38c14250f322682f6eb8ff1147e13c97e43eaf13dac2a115ecadb1639
+ size 934954944
Qwen2.5-1.5B-Instruct-Q4_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4925db42d0e522ff16d853f1a34f6fce3456821fedae4b4aa46f18f043134aa9
- size 986048512
+ oid sha256:4643c079a434d38ce7472577373104ee0851bb892786d4684f22f25c89acc167
+ size 986048448
Qwen2.5-1.5B-Instruct-Q4_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6f49e37fed4b77858c24260639dfde50c64795910df14d9239414765aa0b3956
- size 940312576
+ oid sha256:6f34dbbefdd5e01831587f13114cbb6ef127aefa5a127d22bd9ccf2e6d2a0a96
+ size 940312512
Qwen2.5-1.5B-Instruct-Q5_0.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:04a0e10703fd91c80d1dff374fea060958c8986ae347fb1f7de437da67b4353f
- size 1098729472
+ oid sha256:eecff63427383049ce9c87d8dbbd439e72859f86b87eb1b83bce3272b297ab16
+ size 1098729408
Qwen2.5-1.5B-Instruct-Q5_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:28cf3167aebfd85b342a90434f3e6cff597c6f19f8059cc97d9b8a9c5b521ebc
- size 1125050368
+ oid sha256:bbaf82e161b6579cd307dc2720763ceb35a6e74c6ebe625418b292b5ea13f6d2
+ size 1125050304
Qwen2.5-1.5B-Instruct-Q5_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5068e0eee94cbe9fc6f5b718af249c99cc333ae7982f016161196e61a92af804
- size 1098729472
+ oid sha256:b3c534efac94d3c677ce513b722af3b91cbf51302fb391fddd0fc59a2f8544cf
+ size 1098729408
Qwen2.5-1.5B-Instruct-Q6_K.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:628a9402c73c66e143251ea64e06c8103940b13b06236f7c28512e0cd1994254
- size 1272739840
+ oid sha256:b6c031fb09af17129aa73bd7c9f1542f86411375f6340dee831dbac28ea28291
+ size 1272739776
Qwen2.5-1.5B-Instruct-Q8_0.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:acff5f5be570aebc823f2d931a8d2d23d8c77bd9970e90eb311d3c90212187d8
- size 1646573056
+ oid sha256:5542aabf41e558ac36fcb56b8d00919607ed2227de376d435ecad04834264f3f
+ size 1646572992
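Every `.gguf` entry above is a Git LFS pointer file, so each hunk only swaps the `oid sha256:` digest and the byte `size`; the binary weights themselves live in LFS storage. A quick way to confirm that a downloaded file matches its pointer is to recompute the digest and size locally. A minimal sketch, using the new Q2_K values from this commit (the local filename is whatever you saved the file as):

```python
# Minimal integrity check of a downloaded file against an LFS pointer's oid/size.
# Expected values are taken from the Q2_K pointer in this commit.
import hashlib
from pathlib import Path

path = Path("Qwen2.5-1.5B-Instruct-Q2_K.gguf")
expected_oid = "592ca957d2611c9f2f09e49f58e0efcd5014af097051a21ae90a487655232f0e"
expected_size = 676304832

h = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

assert path.stat().st_size == expected_size, "size mismatch"
assert h.hexdigest() == expected_oid, "sha256 mismatch"
print("pointer matches:", path.name)
```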
README.md CHANGED
@@ -1,15 +1,14 @@
  ---
- license: apache-2.0
- license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
+ base_model: unsloth/Qwen2.5-1.5B-Instruct
  language:
  - en
- pipeline_tag: text-generation
- base_model: Qwen/Qwen2.5-1.5B-Instruct
+ library_name: transformers
+ license: apache-2.0
  tags:
- - chat
+ - unsloth
+ - transformers
  - TensorBlock
  - GGUF
- library_name: transformers
  ---

  <div style="width: auto; margin-left: auto; margin-right: auto">
@@ -23,13 +22,12 @@ library_name: transformers
  </div>
  </div>

- ## Qwen/Qwen2.5-1.5B-Instruct - GGUF
+ ## unsloth/Qwen2.5-1.5B-Instruct - GGUF

- This repo contains GGUF format model files for [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
+ This repo contains GGUF format model files for [unsloth/Qwen2.5-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct).

  The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).

- 
  <div style="text-align: left; margin: 20px 0;">
  <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
  Run them on the TensorBlock client using your local machine ↗
@@ -38,7 +36,6 @@ The files were quantized using machines provided by [TensorBlock](https://tensor

  ## Prompt template

- 
  ```
  <|im_start|>system
  {system_prompt}<|im_end|>
@@ -51,18 +48,18 @@ The files were quantized using machines provided by [TensorBlock](https://tensor

  | Filename | Quant type | File Size | Description |
  | -------- | ---------- | --------- | ----------- |
- | [Qwen2.5-1.5B-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q2_K.gguf) | Q2_K | 0.630 GB | smallest, significant quality loss - not recommended for most purposes |
- | [Qwen2.5-1.5B-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q3_K_S.gguf) | Q3_K_S | 0.709 GB | very small, high quality loss |
- | [Qwen2.5-1.5B-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q3_K_M.gguf) | Q3_K_M | 0.768 GB | very small, high quality loss |
- | [Qwen2.5-1.5B-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q3_K_L.gguf) | Q3_K_L | 0.820 GB | small, substantial quality loss |
- | [Qwen2.5-1.5B-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q4_0.gguf) | Q4_0 | 0.871 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
- | [Qwen2.5-1.5B-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q4_K_S.gguf) | Q4_K_S | 0.876 GB | small, greater quality loss |
- | [Qwen2.5-1.5B-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q4_K_M.gguf) | Q4_K_M | 0.918 GB | medium, balanced quality - recommended |
- | [Qwen2.5-1.5B-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q5_0.gguf) | Q5_0 | 1.023 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
- | [Qwen2.5-1.5B-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q5_K_S.gguf) | Q5_K_S | 1.023 GB | large, low quality loss - recommended |
- | [Qwen2.5-1.5B-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q5_K_M.gguf) | Q5_K_M | 1.048 GB | large, very low quality loss - recommended |
- | [Qwen2.5-1.5B-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q6_K.gguf) | Q6_K | 1.185 GB | very large, extremely low quality loss |
- | [Qwen2.5-1.5B-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q8_0.gguf) | Q8_0 | 1.533 GB | very large, extremely low quality loss - not recommended |
+ | [Qwen2.5-1.5B-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q2_K.gguf) | Q2_K | 0.676 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [Qwen2.5-1.5B-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q3_K_S.gguf) | Q3_K_S | 0.761 GB | very small, high quality loss |
+ | [Qwen2.5-1.5B-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q3_K_M.gguf) | Q3_K_M | 0.824 GB | very small, high quality loss |
+ | [Qwen2.5-1.5B-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q3_K_L.gguf) | Q3_K_L | 0.880 GB | small, substantial quality loss |
+ | [Qwen2.5-1.5B-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q4_0.gguf) | Q4_0 | 0.935 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [Qwen2.5-1.5B-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q4_K_S.gguf) | Q4_K_S | 0.940 GB | small, greater quality loss |
+ | [Qwen2.5-1.5B-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q4_K_M.gguf) | Q4_K_M | 0.986 GB | medium, balanced quality - recommended |
+ | [Qwen2.5-1.5B-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q5_0.gguf) | Q5_0 | 1.099 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [Qwen2.5-1.5B-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q5_K_S.gguf) | Q5_K_S | 1.099 GB | large, low quality loss - recommended |
+ | [Qwen2.5-1.5B-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q5_K_M.gguf) | Q5_K_M | 1.125 GB | large, very low quality loss - recommended |
+ | [Qwen2.5-1.5B-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q6_K.gguf) | Q6_K | 1.273 GB | very large, extremely low quality loss |
+ | [Qwen2.5-1.5B-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-1.5B-Instruct-GGUF/blob/main/Qwen2.5-1.5B-Instruct-Q8_0.gguf) | Q8_0 | 1.647 GB | very large, extremely low quality loss - not recommended |


  ## Downloading instruction
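The body of the "Downloading instruction" section is not shown in this hunk. Purely as an illustrative sketch (not necessarily the instructions the README itself gives), one of the files listed in the table can be fetched with `huggingface_hub`:

```python
# Illustration only: fetch a single quant from the repo referenced in the table above.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="tensorblock/Qwen2.5-1.5B-Instruct-GGUF",
    filename="Qwen2.5-1.5B-Instruct-Q4_K_M.gguf",  # the quant marked "recommended" in the table
)
print(local_path)
```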