---
license: llama2
language:
  - en
tags:
  - mistral
library_name: transformers
pipeline_tag: text-generation
mergekit:
  - Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
  - uukuguy/speechless-mistral-six-in-one-7b
datasets:
  - stingning/ultrachat
  - garage-bAInd/Open-Platypus
  - Open-Orca/OpenOrca
  - TIGER-Lab/MathInstruct
  - OpenAssistant/oasst_top1_2023-08-25
  - teknium/openhermes
  - meta-math/MetaMathQA
  - Open-Orca/SlimOrca

---

<p align="center">
  <img src="https://codeberg.org/aninokuma/DeydooAssistant/raw/branch/main/logo.webp" height="256px" alt="SynthIQ">
</p>

# SynthIQ

This is SynthIQ, rated 92.23/100 by GPT-4 across varied complex prompts. I used [mergekit](https://github.com/cg123/mergekit) to merge the two source models with the configuration below.


# YAML Config

```yaml

slices:
  - sources:
      - model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
        layer_range: [0, 32]
      - model: uukuguy/speechless-mistral-six-in-one-7b
        layer_range: [0, 32]

merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1

parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
tokenizer_source: union

dtype: bfloat16

```
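To reproduce the merge, the config above can be passed to mergekit's `mergekit-yaml` entry point. A minimal sketch, assuming mergekit is installed and that the config is saved as `synthiq.yaml` with the merged weights written to `./SynthIQ` (both paths are placeholders):

```python
# Run the slerp merge defined in the YAML config above by invoking the
# mergekit CLI; adjust the config path and output directory as needed.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",   # CLI shipped with mergekit
        "synthiq.yaml",    # the merge config shown above (placeholder name)
        "./SynthIQ",       # output directory for the merged weights (placeholder)
    ],
    check=True,
)
```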

<!-- prompt-template start -->
## Prompt template: ChatML

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant

```

<!-- prompt-template end -->
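A minimal generation sketch with the `transformers` library, assuming the model is available under a placeholder repo id (`SynthIQ` below, substitute the actual Hub id or local path) and loaded in bfloat16 to match the merge dtype:

```python
# Load the merged model and query it with a ChatML-formatted prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SynthIQ"  # placeholder: replace with the real repo id or local path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build the prompt following the ChatML template shown above.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Explain slerp model merging in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Print only the newly generated assistant turn.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```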

The license is Llama 2, since uukuguy/speechless-mistral-six-in-one-7b is released under the Llama 2 license.