---
base_model:
- DeepMount00/Llama-3-8b-Ita
- nbeerbower/llama-3-gutenberg-8B
- failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
- shenzhi-wang/Llama3-8B-Chinese-Chat
- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# ties
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base.
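At a high level, TIES builds a task vector (fine-tune minus base) for each model, trims each vector to its highest-magnitude entries, elects a per-parameter sign by majority mass, and averages only the entries that agree with the elected sign. The following is a minimal PyTorch sketch of that idea for a single parameter tensor; the function name and signature are illustrative, not mergekit's actual implementation.

```python
import torch

def ties_merge(base: torch.Tensor, finetuned: list[torch.Tensor],
               density: float = 0.5, weights: list[float] | None = None) -> torch.Tensor:
    """Illustrative TIES merge of one parameter tensor (not mergekit's API)."""
    weights = weights or [1.0] * len(finetuned)
    # 1. Task vectors: weighted deltas between each fine-tune and the base.
    deltas = [w * (ft - base) for w, ft in zip(weights, finetuned)]
    # 2. Trim: keep only the top `density` fraction of entries by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.numel()))
        thresh = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        trimmed.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))
    # 3. Elect signs: per-parameter sign with the larger total mass wins.
    stacked = torch.stack(trimmed)
    elected = torch.sign(stacked.sum(dim=0))
    # 4. Disjoint mean: average only the entries agreeing with the elected sign.
    agree = (torch.sign(stacked) == elected) & (stacked != 0)
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged
```

With `density: 0.5` as in the configuration below, half of each task vector's entries survive the trim step, and `weight: 1.0` gives every model an equal vote in the sign election.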
### Models Merged
The following models were included in the merge:
* [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita)
* [nbeerbower/llama-3-gutenberg-8B](https://huggingface.co/nbeerbower/llama-3-gutenberg-8B)
* [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3)
* [shenzhi-wang/Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat)
* [VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.5
      weight: 1.0
  - model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
    parameters:
      density: 0.5
      weight: 1.0
  - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
    parameters:
      density: 0.5
      weight: 1.0
  - model: DeepMount00/Llama-3-8b-Ita
    parameters:
      density: 0.5
      weight: 1.0
  - model: nbeerbower/llama-3-gutenberg-8B
    parameters:
      density: 0.5
      weight: 1.0
  - model: shenzhi-wang/Llama3-8B-Chinese-Chat
    parameters:
      density: 0.5
      weight: 1.0
merge_method: ties
base_model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
  int8_mask: true
dtype: bfloat16
```
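To reproduce the merge, save this configuration to a file and run mergekit's `mergekit-yaml` entry point (e.g. `mergekit-yaml config.yaml ./merged`). The resulting checkpoint loads like any Llama-3 Instruct model; a usage sketch follows, where the repository id is a placeholder for wherever the merged weights are hosted.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id: substitute the actual repository or local path of the merge.
model_id = "johnsutor/ties"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",
)

# Llama-3 Instruct ships a chat template in its tokenizer config.
messages = [{"role": "user", "content": "Qual è la capitale d'Italia?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```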