---
license: other
inference: false
language:
- en
pipeline_tag: text-generation
tags:
- transformers
- gguf
- imatrix
- WestIceLemonTeaRP-32k-7b
- icefog72
---
Quantizations of https://huggingface.co/icefog72/WestIceLemonTeaRP-32k-7b
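
A minimal sketch of loading one of these GGUF quantizations with llama-cpp-python; the filename below is a placeholder, so substitute an actual quant file from this repo's file list:

```python
# Minimal sketch: run one of the GGUF quants with llama-cpp-python.
# The model_path filename is hypothetical -- pick a real file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="WestIceLemonTeaRP-32k-7b.Q4_K_M.gguf",  # placeholder filename
    n_ctx=32768,      # the model advertises a 32k context window
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```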
# From original readme
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
Prompt template: Alpaca (ChatML may also work).
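
For reference, a small helper that builds the standard Alpaca prompt format (a sketch of the common template; ChatML formatting would differ):

```python
# Sketch of the standard Alpaca prompt format suggested above.
def alpaca_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("Introduce yourself in one sentence."))
```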
* measurement.json for exl2 quantization is included.
- [4.2bpw-exl2](https://huggingface.co/icefog72/WestIceLemonTeaRP-32k-7b-4.2bpw-exl2)
- [6.5bpw-exl2](https://huggingface.co/icefog72/WestIceLemonTeaRP-32k-7b-6.5bpw-exl2)
- [8bpw-exl2](https://huggingface.co/icefog72/WestIceLemonTeaRP-32k-7b-8bpw-exl2)
Thanks to mradermacher and SilverFan for:
* [mradermacher/WestIceLemonTeaRP-32k-GGUF](https://huggingface.co/mradermacher/WestIceLemonTeaRP-32k-GGUF)
* [SilverFan/WestIceLemonTeaRP-7b-32k-GGUF](https://huggingface.co/SilverFan/WestIceLemonTeaRP-7b-32k-GGUF)
### Merge Method
This model was merged using the SLERP merge method.
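
SLERP interpolates along the arc between two weight vectors rather than the straight line between them, which better preserves their geometry when blending. A minimal numpy sketch of the idea (an illustrative reconstruction, not mergekit's actual implementation):

```python
# Illustrative SLERP (spherical linear interpolation) over two flattened
# weight tensors, as in mergekit's slerp method; simplified reconstruction.
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    # Measure the angle between the two weight vectors on the unit sphere.
    u0 = v0 / (np.linalg.norm(v0) + eps)
    u1 = v1 / (np.linalg.norm(v1) + eps)
    theta = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1.0 - t) * v0 + t * v1
    sin_theta = np.sin(theta)
    # Weight each endpoint so the blend travels along the arc between them.
    return (np.sin((1.0 - t) * theta) / sin_theta) * v0 + \
           (np.sin(t * theta) / sin_theta) * v1
```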
### Models Merged
The following models were included in the merge:
* [IceLemonTeaRP-32k-7b](https://huggingface.co/icefog72/IceLemonTeaRP-32k-7b)
* WestWizardIceLemonTeaRP
  * [SeverusWestLake-7B-DPO](https://huggingface.co/s3nh/SeverusWestLake-7B-DPO)
  * WizardIceLemonTeaRP
    * [Not-WizardLM-2-7B](https://huggingface.co/amazingvince/Not-WizardLM-2-7B)
    * [IceLemonTeaRP-32k-7b](https://huggingface.co/icefog72/IceLemonTeaRP-32k-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: IceLemonTeaRP-32k-7b
        layer_range: [0, 32]
      - model: WestWizardIceLemonTeaRP
        layer_range: [0, 32]
merge_method: slerp
base_model: IceLemonTeaRP-32k-7b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```
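
The `t` lists above are gradient anchors: mergekit spreads them evenly across the layer range and interpolates between them, so each of the 32 layers gets its own blend factor per filter (`self_attn`, `mlp`, and the `0.5` default for everything else). A rough sketch of that expansion, assuming simple linear interpolation between anchors:

```python
# Sketch of expanding a mergekit-style gradient list (e.g. the self_attn
# values [0, 0.5, 0.3, 0.7, 1]) into a per-layer t across 32 layers.
# An illustration of the idea, not mergekit's actual code.
import numpy as np

def expand_gradient(anchors: list[float], num_layers: int) -> np.ndarray:
    # Anchor points are spaced evenly over the layer range,
    # with linear interpolation for the layers in between.
    xs = np.linspace(0, num_layers - 1, num=len(anchors))
    return np.interp(np.arange(num_layers), xs, anchors)

print(expand_gradient([0, 0.5, 0.3, 0.7, 1], 32))  # per-layer t for self_attn
```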
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63407b719dbfe0d48b2d763b/GX-kV-H8_zAJz5hHL8A7G.png)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_icefog72__WestIceLemonTeaRP-32k-7b).
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.27|
|AI2 Reasoning Challenge (25-Shot)|68.77|
|HellaSwag (10-Shot) |86.89|
|MMLU (5-Shot) |64.28|
|TruthfulQA (0-shot) |62.47|
|Winogrande (5-shot) |80.98|
|GSM8k (5-shot) |64.22|