---
license: llama2
language:
- en
tags:
- mistral
library_name: transformers
pipeline_tag: text-generation
mergekit:
- Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
- uukuguy/speechless-mistral-six-in-one-7b
datasets:
- stingning/ultrachat
- garage-bAInd/Open-Platypus
- Open-Orca/OpenOrca
- TIGER-Lab/MathInstruct
- OpenAssistant/oasst_top1_2023-08-25
- teknium/openhermes
- meta-math/MetaMathQA
- Open-Orca/SlimOrca
---

# SynthIQ

This is SynthIQ, rated 92.23/100 by GPT-4 across a varied set of complex prompts. I used [mergekit](https://github.com/cg123/mergekit) to merge the models.

# YAML Config

```yaml
slices:
  - sources:
      - model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
        layer_range: [0, 32]
      - model: uukuguy/speechless-mistral-six-in-one-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: bfloat16
```

## Prompt template: ChatML

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

This model is released under the Llama 2 license, since uukuguy/speechless-mistral-six-in-one-7b is itself Llama 2 licensed.
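
To reproduce the merge, here is a minimal sketch that runs the YAML config above through mergekit's Python API (based on mergekit's README at the time of writing; the `mergekit-yaml` CLI is the simpler alternative). `CONFIG_YML` and `OUTPUT_PATH` are placeholder names, not paths from this card:

```python
# Reproduction sketch: run the merge config above with mergekit.
# Assumes mergekit is installed (pip install git+https://github.com/cg123/mergekit)
# and that the YAML config above is saved at CONFIG_YML.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "synthiq-merge.yaml"  # placeholder path to the config above
OUTPUT_PATH = "./synthiq-merged"   # placeholder output directory

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU if one is available
        copy_tokenizer=True,             # write the merged tokenizer to the output
    ),
)
```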
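
And a minimal inference sketch that fills in the ChatML template above and generates with transformers (assumptions: the merge is saved as a standard transformers checkpoint, and `MODEL_PATH` is a placeholder for its location):

```python
# Inference sketch: build a ChatML prompt matching the template above
# and generate with transformers. MODEL_PATH is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "./synthiq-merged"  # placeholder; point at the merged checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH, torch_dtype=torch.bfloat16, device_map="auto"
)

# Fill in the ChatML template from the prompt-template section.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Explain slerp merging in one paragraph.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```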