ryzen88 committed · Commit 8fa334b (verified) · 1 Parent(s): 8651d5e

Update README.md

Files changed (1): README.md +54 -54
README.md CHANGED
@@ -1,54 +1,54 @@
- ---
- base_model: []
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # model
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the breadcrumbs_ties merge method using Z:\peter\LLM's\Llama-3-Giraffe-70B-Instruct as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * Z:\peter\LLM's\Smaug-Llama-3-70B-Instruct
- * I:\Llama-3-Lumimaid-70B-v0.1-alt
- * I:\Tess-2.0-Llama-3-70B-v0.2
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- models:
-   - model: Z:\peter\LLM's\Llama-3-Giraffe-70B-Instruct
-     parameters:
-       weight: 0.25
-       density: 0.90
-       gamma: 0.01
-   - model: Z:\peter\LLM's\Smaug-Llama-3-70B-Instruct
-     parameters:
-       weight: 0.30
-       density: 0.90
-       gamma: 0.01
-   - model: I:\Tess-2.0-Llama-3-70B-v0.2
-     parameters:
-       weight: 0.15
-       density: 0.90
-       gamma: 0.01
-   - model: I:\Llama-3-Lumimaid-70B-v0.1-alt
-     parameters:
-       weight: 0.30
-       density: 0.90
-       gamma: 0.01
- merge_method: breadcrumbs_ties
- base_model: Z:\peter\LLM's\Llama-3-Giraffe-70B-Instruct
- dtype: bfloat16
- ```
 
+ ---
+ base_model: []
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+
+ ---
+ # model
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the breadcrumbs_ties merge method using Z:\Llama-3-Giraffe-70B-Instruct as a base.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * Z:\Smaug-Llama-3-70B-Instruct
+ * I:\Llama-3-Lumimaid-70B-v0.1-alt
+ * I:\Tess-2.0-Llama-3-70B-v0.2
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ models:
+   - model: Z:\LLM's\Llama-3-Giraffe-70B-Instruct
+     parameters:
+       weight: 0.25
+       density: 0.90
+       gamma: 0.01
+   - model: Z:\LLM's\Smaug-Llama-3-70B-Instruct
+     parameters:
+       weight: 0.30
+       density: 0.90
+       gamma: 0.01
+   - model: I:\Tess-2.0-Llama-3-70B-v0.2
+     parameters:
+       weight: 0.15
+       density: 0.90
+       gamma: 0.01
+   - model: I:\Llama-3-Lumimaid-70B-v0.1-alt
+     parameters:
+       weight: 0.30
+       density: 0.90
+       gamma: 0.01
+ merge_method: breadcrumbs_ties
+ base_model: Z:\LLM's\Llama-3-Giraffe-70B-Instruct
+ dtype: bfloat16
+ ```