---
base_model:
- Orion-zhen/Qwen2.5-14B-Instruct-Uncensored
- allura-org/TQ2.5-14B-Sugarquill-v1
- EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
- Qwen/Qwen2.5-14B-Instruct
- Orion-zhen/Meissa-Qwen2.5-14B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
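The merged weights should load like any Qwen2.5 checkpoint through `transformers` (the card's declared library). A minimal loading sketch; the repository id below is a placeholder for wherever this merge is hosted:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Delta-Vector/this-merge"  # placeholder; substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype: bfloat16
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a two-sentence story."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```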
## Merge Details
### Merge Method
This model was merged using the della_linear merge method, with [Orion-zhen/Qwen2.5-14B-Instruct-Uncensored](https://huggingface.co/Orion-zhen/Qwen2.5-14B-Instruct-Uncensored) as the base.
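Roughly, della_linear takes each fine-tuned model's parameter deltas against the base, stochastically drops entries with magnitude-dependent keep probabilities (`density` sets the average keep rate, `epsilon` its spread), rescales the survivors, and combines the results linearly, scaled by `lambda`. A toy single-tensor sketch of that idea follows; it is not mergekit's actual implementation, and all names here are illustrative:

```python
import torch

def della_linear_sketch(base, tuned, weights, densities, epsilon=0.05, lam=1.0):
    """Toy illustration of della_linear-style merging on one tensor.

    Simplified sketch only: magnitude-based stochastic pruning of deltas,
    rescaling of survivors, then a linear mix on top of the base weights.
    """
    merged_delta = torch.zeros_like(base)
    for t, w, d in zip(tuned, weights, densities):
        delta = t - base  # task vector for this fine-tune
        # Rank entries by |delta|: rank 0 = smallest magnitude.
        ranks = delta.abs().flatten().argsort().argsort().float()
        ranks = ranks / max(ranks.numel() - 1, 1)
        # Keep probabilities spread over [d - epsilon, d + epsilon];
        # larger-magnitude entries are kept more often.
        keep_p = (d - epsilon + 2.0 * epsilon * ranks).clamp(0.0, 1.0)
        mask = torch.bernoulli(keep_p).reshape(delta.shape)
        keep_p = keep_p.reshape(delta.shape)
        # Rescale survivors so the delta's expected value is preserved.
        merged_delta += w * delta * mask / keep_p.clamp(min=1e-8)
    return base + lam * merged_delta

# Example: merge two toy "fine-tunes" of a random base tensor.
base = torch.randn(4, 4)
tuned = [base + 0.1 * torch.randn(4, 4), base + 0.1 * torch.randn(4, 4)]
print(della_linear_sketch(base, tuned, weights=[0.5, 0.5], densities=[0.6, 0.6]))
```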
### Models Merged
The following models were included in the merge:
* [allura-org/TQ2.5-14B-Sugarquill-v1](https://huggingface.co/allura-org/TQ2.5-14B-Sugarquill-v1)
* [EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2)
* [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
* [Orion-zhen/Meissa-Qwen2.5-14B-Instruct](https://huggingface.co/Orion-zhen/Meissa-Qwen2.5-14B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Qwen/Qwen2.5-14B-Instruct
    parameters:
      weight: 0.1
      density: 0.4
  - model: Orion-zhen/Meissa-Qwen2.5-14B-Instruct
    parameters:
      weight: 0.12
      density: 0.5
  - model: EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
    parameters:
      weight: 0.2
      density: 0.6
  - model: allura-org/TQ2.5-14B-Sugarquill-v1
    parameters:
      weight: 0.45
      density: 0.7
merge_method: della_linear
base_model: Orion-zhen/Qwen2.5-14B-Instruct-Uncensored
parameters:
  epsilon: 0.05
  lambda: 1
dtype: bfloat16
tokenizer_source: base
```
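To reproduce the merge, the configuration above can be passed to mergekit, either via its `mergekit-yaml` CLI or its Python API. A minimal sketch of the Python route, assuming the config is saved locally as `config.yaml` (paths are placeholders, and option details may vary between mergekit versions):

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML config shown above (local path is a placeholder).
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Download the source models, apply della_linear, and write the result.
run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # write a tokenizer alongside the merged weights
        lazy_unpickle=True,   # lower peak memory while loading shards
    ),
)
```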