---
base_model:
- EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
- Ttimofeyka/Tissint-14B-v1.1-128k-RP
library_name: transformers
tags:
- mergekit
- merge
---
# EVA-Tissint-v1.1-14B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
A new merge of [EVA v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2), now using [Tissint v1.1](https://huggingface.co/Ttimofeyka/Tissint-14B-v1.1-128k-RP). Hopefully the newer version of Tissint and a slight change to the merge settings improve on the first iteration.
I recommend the samplers provided on Tissint's model card.
If you'd like to use XTC, a threshold of 0.2 works well; lower thresholds seem to hurt coherence.
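For loading the full-precision weights, here's a minimal transformers sketch. The repo id and the sampler values below are placeholders (assumptions, not confirmed by this card); substitute the samplers from Tissint's card.

```python
# Minimal sketch, assuming the merged weights live at ockerman0/EVA-Tissint-v1.1-14B
# (hypothetical repo id) and that bfloat16 inference fits on your GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ockerman0/EVA-Tissint-v1.1-14B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

# Qwen2.5-based merges ship a ChatML chat template in the tokenizer config.
messages = [{"role": "user", "content": "Write a short scene set in a rainy harbour town."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Placeholder sampler values; prefer the ones on Tissint's model card.
output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8, min_p=0.05)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```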
### Quantisations
* Static: [mradermacher/EVA-Tissint-v1.1-14B-GGUF](https://huggingface.co/mradermacher/EVA-Tissint-v1.1-14B-GGUF)
* Imatrix: [mradermacher/EVA-Tissint-v1.1-14B-i1-GGUF](https://huggingface.co/mradermacher/EVA-Tissint-v1.1-14B-i1-GGUF)
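To try one of the quants locally, here's a llama-cpp-python sketch. The filename glob is an assumption about mradermacher's file naming, and the `xtc_*` arguments only exist on recent llama-cpp-python builds with XTC support (also an assumption; drop them if your version rejects them).

```python
# Minimal sketch, assuming llama-cpp-python with huggingface-hub installed
# so Llama.from_pretrained can fetch the GGUF from the Hub.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/EVA-Tissint-v1.1-14B-GGUF",
    filename="*Q4_K_M.gguf",  # assumed naming; pick whichever quant suits your hardware
    n_ctx=8192,
    n_gpu_layers=-1,  # offload everything that fits
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Describe the harbour at dusk."}],
    max_tokens=256,
    temperature=0.8,
    # XTC threshold 0.2 as suggested above; these kwargs are an assumption and
    # require a recent build with XTC sampling.
    xtc_probability=0.5,
    xtc_threshold=0.2,
)
print(out["choices"][0]["message"]["content"])
```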
### Merge Method
This model was merged using the della_linear merge method, with EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2 as the base.
### Models Merged
The following models were included in the merge:
* Ttimofeyka/Tissint-14B-v1.1-128k-RP
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Ttimofeyka/Tissint-14B-v1.1-128k-RP
    parameters:
      density: 0.45
      weight: 0.3
  - model: EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
    parameters:
      density: 0.55
      weight: 0.7
merge_method: della_linear
base_model: EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
parameters:
  epsilon: 0.05
  lambda: 1
dtype: bfloat16
```
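If you'd like to reproduce or tweak the merge, the block above can be passed straight to mergekit, e.g. `mergekit-yaml config.yaml ./output-model-directory` with the YAML saved as `config.yaml`. Roughly speaking, `weight` is each model's linear mixing factor, `density` the fraction of its delta parameters kept, `epsilon` the spread of the magnitude-ranked drop probabilities around that density, and `lambda` a scale applied to the merged deltas; see mergekit's DELLA documentation for the exact semantics.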