MaziyarPanahi committed
adding EXL2 reference (#15)
(commit ea64b7df052e112c24df3b067fc14ad90d4ed7eb)
README.md CHANGED
@@ -125,7 +125,14 @@ This model is an advanced iteration of the powerful `Qwen/Qwen2.5-72B`, specific
 
 # ⚡ Quantized GGUF
 
-Here are the GGUF models thanks to [
+Here are the GGUF models thanks to [bartowski](https://huggingface.co/bartowski): [calme-3.2-instruct-78b-GGUF](https://huggingface.co/bartowski/calme-3.2-instruct-78b-GGUF)
+
+# ⚡ Quantized EXL2
+
+Here is the EXL2 4.5 bits per weight (bpw) model thanks to [DavidCatalano](https://huggingface.co/DavidCatalano): [DavidCatalano/calme-3.2-instruct-78b-exl2](https://huggingface.co/DavidCatalano/calme-3.2-instruct-78b-exl2)
+
+DavidCatalano/calme-3.2-instruct-78b-exl2-4.5bpw.
+
 
 # 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
 Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__calme-3.2-instruct-78b)
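
For readers who want to try the GGUF quants referenced in the added GGUF line, here is a minimal sketch using llama-cpp-python. The `filename` glob and the Q4_K_M quant level are assumptions, not taken from the commit; adjust them to whichever GGUF files are actually published in bartowski/calme-3.2-instruct-78b-GGUF.

```python
# Minimal sketch: load one of the GGUF quants with llama-cpp-python.
# Assumption: a Q4_K_M file exists in the repo; swap the glob for any
# quant level that is actually available there.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="bartowski/calme-3.2-instruct-78b-GGUF",
    filename="*Q4_K_M*",  # glob matched against the repo's GGUF files
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization is."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```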
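Similarly, a hedged sketch for fetching the EXL2 4.5 bpw weights referenced in the new EXL2 section, using huggingface_hub. Whether the 4.5 bpw files sit on the default branch of DavidCatalano/calme-3.2-instruct-78b-exl2 or on a separate revision is an assumption; check the repo and pass `revision=` accordingly. The downloaded folder can then be pointed at by any ExLlamaV2-based runtime (for example text-generation-webui or TabbyAPI).

```python
# Minimal sketch: download the EXL2 weights locally with huggingface_hub.
# Assumption: the 4.5 bpw files are on the repo's default branch; if they
# are published under a separate branch, pass revision="<branch-name>".
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="DavidCatalano/calme-3.2-instruct-78b-exl2",
)
print(f"EXL2 weights downloaded to: {local_dir}")
# Point an ExLlamaV2-based loader (text-generation-webui, TabbyAPI, ...) at local_dir.
```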