---
license: mit
---

## Usage

NebulaNet-v2 is a Mixture of Experts (MoE) built from four 7B expert models. It is strong at coding and multilingual translation, and should be fluent at chat and math as well.

## mergekit config

The experts were combined with mergekit; each expert's `positive_prompts` steer the MoE router toward that expert for matching inputs.

```yaml
base_model: ContextualAI/Contextual_KTO_Mistral_PairRM
experts:
  - source_model: ContextualAI/Contextual_KTO_Mistral_PairRM
    positive_prompts:
      - "chat"
      - "assistant"
      - "tell me"
      - "explain"
      - "I want"
  - source_model: Nexusflow/Starling-LM-7B-beta
    positive_prompts:
      - "code"
      - "python"
      - "javascript"
      - "programming"
      - "algorithm"
  - source_model: snorkelai/Snorkel-Mistral-PairRM-DPO
    positive_prompts:
      - ""
  - source_model: mlabonne/NeuralDaredevil-7B
    positive_prompts:
      - "reason"
      - "math"
      - "mathematics"
      - "solve"
      - "count"
```
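A minimal inference sketch using Hugging Face `transformers`, assuming the merged model has been published to the Hub; the repo id below is a hypothetical placeholder, not the actual model path:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/NebulaNet-v2"  # hypothetical repo id; replace with the real path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the checkpoint's native precision
    device_map="auto",   # place layers on available GPUs automatically
)

# Mistral-family chat models are prompted via the tokenizer's chat template.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```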