T145 committed on
Commit dbaa382 · verified · 1 Parent(s): 580d200

Update README.md

Files changed (1)
  1. README.md +61 -57
README.md CHANGED
@@ -1,57 +1,61 @@
- ---
- base_model:
- - SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA
- - akjindal53244/Llama-3.1-Storm-8B
- - unsloth/Meta-Llama-3.1-8B-Instruct
- - arcee-ai/Llama-3.1-SuperNova-Lite
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # Untitled Model (1)
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA)
- * [akjindal53244/Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
- * [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- base_model: unsloth/Meta-Llama-3.1-8B-Instruct
- dtype: bfloat16
- merge_method: dare_ties
- slices:
- - sources:
-   - layer_range: [0, 32]
-     model: akjindal53244/Llama-3.1-Storm-8B
-     parameters:
-       density: 1.0
-       weight: 0.25
-   - layer_range: [0, 32]
-     model: arcee-ai/Llama-3.1-SuperNova-Lite
-     parameters:
-       density: 1.0
-       weight: 0.33
-   - layer_range: [0, 32]
-     model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA
-     parameters:
-       density: 1.0
-       weight: 0.42
-   - layer_range: [0, 32]
-     model: unsloth/Meta-Llama-3.1-8B-Instruct
- tokenizer_source: base
- ```
 
 
 
 
 
+ ---
+ base_model:
+ - SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA
+ - akjindal53244/Llama-3.1-Storm-8B
+ - unsloth/Meta-Llama-3.1-8B-Instruct
+ - arcee-ai/Llama-3.1-SuperNova-Lite
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ - function calling
+ - roleplay
+ - conversational
+ license: llama3.1
+ ---
+ # ZEUS 8B 🌩️ V7
+
+ This merge seeks to improve upon the successful [V2 model](https://huggingface.co/T145/ZEUS-8B-V2) by using a more uncensored Llama 3.1 model in place of Lexi and by increasing the density from `0.8` to `1.0`.
+ Merges with higher densities have shown consistent improvement, and an earlier Evolve Merge test showed that the best density for this model configuration was `1.0`.
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) as a base.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA)
+ * [akjindal53244/Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
+ * [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ base_model: unsloth/Meta-Llama-3.1-8B-Instruct
+ dtype: bfloat16
+ merge_method: dare_ties
+ slices:
+ - sources:
+   - layer_range: [0, 32]
+     model: akjindal53244/Llama-3.1-Storm-8B
+     parameters:
+       density: 1.0
+       weight: 0.25
+   - layer_range: [0, 32]
+     model: arcee-ai/Llama-3.1-SuperNova-Lite
+     parameters:
+       density: 1.0
+       weight: 0.33
+   - layer_range: [0, 32]
+     model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA
+     parameters:
+       density: 1.0
+       weight: 0.42
+   - layer_range: [0, 32]
+     model: unsloth/Meta-Llama-3.1-8B-Instruct
+ tokenizer_source: base
+ ```
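
Since the updated card shows only the merge configuration, below is a minimal usage sketch for running the finished merge with `transformers`. The repo id `T145/ZEUS-8B-V7` is an assumption inferred from the card title and the V2 link above, and the generation settings are illustrative defaults, not the author's recommended ones. The merge itself should be reproducible from the YAML above with mergekit's `mergekit-yaml` entry point (e.g. `mergekit-yaml config.yaml ./merged`).

```python
# Minimal sketch: load the DARE-TIES merge and chat with it via transformers.
# Assumption: the merged weights are published as "T145/ZEUS-8B-V7".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "T145/ZEUS-8B-V7"  # hypothetical repo id inferred from the card title

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",
)

# tokenizer_source: base means the Llama 3.1 Instruct chat template applies.
messages = [{"role": "user", "content": "Who is Zeus?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```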