Update README.md
  results: []
---
|
14 |
+
Oh hell yeah here we go. I've never quanted a 32b model before so hopefully these turn out alright. I ended up making a few more quant types than usual to see about different tiers. Probably won't happen again.

The advice in my [14B Kunou model quants](https://huggingface.co/Statuo/Qwen-2.5-14b-Kunou-EXL2-8bpw) probably still holds true here.

[This is the 5bpw EXL2 quant of this model. For the original model, go here](https://huggingface.co/Sao10K/32B-Qwen2.5-Kunou-v1)
<br>
[For the 8bpw version, go here](https://huggingface.co/Statuo/Sao10k-Qwen2.5-32b-Kunou-EXL2-8bpw)
<br>
[For the 6bpw version, go here](https://huggingface.co/Statuo/Sao10k-Qwen2.5-32b-Kunou-EXL2-6bpw)
<br>
[For the 4.5bpw version, go here](https://huggingface.co/Statuo/Sao10k-Qwen2.5-32b-Kunou-EXL2-4.5bpw)
<br>
[For the 4bpw version, go here](https://huggingface.co/Statuo/Sao10k-Qwen2.5-32b-Kunou-EXL2-4bpw)
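As a rough guide to picking a tier, weight memory scales linearly with bits per weight. A minimal sketch, assuming roughly 32 billion parameters and counting weights only (KV cache and activations need extra headroom on top of this):

```python
# Rough weight-only memory estimate per EXL2 bitrate for a ~32B model.
# Assumption: 32e9 parameters; real VRAM use will be higher once the
# KV cache and activations are accounted for.
PARAMS = 32e9

def weight_gib(bpw: float) -> float:
    # bytes = parameters * bits-per-weight / 8, converted to GiB
    return PARAMS * bpw / 8 / 1024**3

for bpw in (8.0, 6.0, 5.0, 4.5, 4.0):
    print(f"{bpw:>3} bpw -> ~{weight_gib(bpw):.1f} GiB of weights")
```

So the 4bpw quant saves on the order of half the weight memory of the 8bpw one, at some cost in quality.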

---

![Kunou](https://huggingface.co/Sao10K/72B-Qwen2.5-Kunou-v1/resolve/main/knn.png)
**Sister Versions for Lightweight Use!**