## Trained on Benchmarks?

Well, yes, but actually no. You may see the names of benchmarks among the datasets we used; however, only the **train** splits of those benchmarks were used. If you don't know the difference, please learn.
## Quants and Other Formats

- GGUFs: [https://huggingface.co/darkcloudai/huskylm-2.5-8b-GGUF](https://huggingface.co/darkcloudai/huskylm-2.5-8b-GGUF)
- AWQ (bits: 4, gs: 128, version: gemm): [https://huggingface.co/darkcloudai/huskylm-2.5-8b-AWQ](https://huggingface.co/darkcloudai/huskylm-2.5-8b-AWQ)
## Huge Thank You to the Following People/Companies

- [Meta AI](https://llama.meta.com/llama3/): This model would never have been possible if Meta AI had not released Llama 3 under an open license. We thank them deeply for making frontier LLMs available to all.
- [Jon Durbin](https://huggingface.co/jondurbin): We've used many of his datasets to train this model, specifically `airoboros-3.2`, `contextual-dpo-v0.1`, `gutenberg-dpo-v0.1`, `py-dpo-v0.1`, `truthy-dpo-v0.1`, and `cinematika-v0.1`. His work is amazing, and we thank him a lot. We've also used many of the datasets he used for his `bagel` series of models. If you couldn't already guess, this model is essentially a `bagel` model, but with our custom datasets and RLAIF methodology added in.