Update README.md
README.md
CHANGED
````diff
@@ -3,34 +3,6 @@ license: apache-2.0
 tags:
 - merge
 - mergekit
-- lazymergekit
-- meta-math/MetaMath-Mistral-7B
-- openchat/openchat-3.5-1210
 ---
 
-
-
-evo_exp-point-1-3 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
-* [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B)
-* [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210)
-
-## 🧩 Configuration
-
-```yaml
-slices:
-  - sources:
-      - model: meta-math/MetaMath-Mistral-7B
-        layer_range: [0, 32]
-      - model: openchat/openchat-3.5-1210
-        layer_range: [0, 32]
-merge_method: slerp
-base_model: meta-math/MetaMath-Mistral-7B
-parameters:
-  t:
-    - filter: self_attn
-      value: [0, 0.5, 0.3, 0.7, 1]
-    - filter: mlp
-      value: [1, 0.5, 0.7, 0.3, 0]
-    - value: 0.5
-dtype: bfloat16
-```
+This is an open model for iterative merging experiments.
````
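For reference, the configuration removed above merged the two source models layer by layer with SLERP (spherical linear interpolation), using different `t` schedules for `self_attn` and `mlp` tensors; such a config is typically applied with mergekit's `mergekit-yaml` CLI. The sketch below is a minimal illustration of the interpolation itself, not mergekit's actual implementation; the function name, the normalization, and the fallback to linear interpolation for near-parallel tensors are assumptions made for clarity.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (sketch only).

    mergekit's real slerp additionally handles degenerate angles and
    per-layer / per-filter t schedules like the ones in the removed config.
    """
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Angle between the two weight vectors, computed on normalized copies.
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    cos_omega = torch.clamp(a_unit @ b_unit, -1.0, 1.0)
    omega = torch.acos(cos_omega)
    if omega.abs() < 1e-4:
        # Nearly parallel vectors: plain linear interpolation is numerically safer.
        mixed = (1 - t) * a_flat + t * b_flat
    else:
        mixed = (torch.sin((1 - t) * omega) * a_flat +
                 torch.sin(t * omega) * b_flat) / torch.sin(omega)
    return mixed.reshape(a.shape).to(a.dtype)

# t = 0 keeps the base model's tensor (MetaMath-Mistral-7B), t = 1 keeps the
# other model's (openchat-3.5-1210); the removed config ramps t across layers.
```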
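Since the updated card only states that this is an open model for iterative merging experiments, a standard Transformers loading snippet may be useful. This is a sketch: the repository id below is a placeholder, as the full namespace is not given in this diff.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/evo_exp-point-1-3"  # placeholder: exact repo id not stated in the diff

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the removed merge config
    device_map="auto",
)

prompt = "Solve step by step: 12 * 7 = "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```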