Description

This is BigMistral-13b (a 13b Mistral base model, built using a modified NeverSleep recipe) merged with a 512-rank LoRA trained over it directly.
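
For reference, a minimal sketch of how a LoRA adapter can be folded into a base model with the PEFT library. The model and adapter paths below are hypothetical placeholders, not the actual repositories.

```python
# Sketch: merge a LoRA adapter into its base model so the result is standalone.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/BigMistral-13b", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("path/to/BigMistral-13b")

# Load the rank-512 adapter trained on top of the merge, then merge its
# weights into the base model and drop the adapter wrappers.
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
merged = model.merge_and_unload()

merged.save_pretrained("BigMistral-13b-merged")
tokenizer.save_pretrained("BigMistral-13b-merged")
```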

Logic

My 20b Llama 2 merges did well, in part due to the inclusion of Elieithyia-20b, which was trained on top of a 20b merge directly. This time, I trained the LoRA not with the traditional goals in mind, but with the goal of “healing” the 13b merge. This involved significantly increasing the gradient accumulation steps, lowering the learning rate, and decreasing the dropout. Fingers crossed!
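
A rough sketch of what that training setup might look like with PEFT and Transformers. The exact values were not published, so every number below is a placeholder that only illustrates the direction of each change.

```python
# Illustrative configuration only: placeholder values, not the actual recipe.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=512,                       # rank stated in the description
    lora_alpha=512,              # assumed; not stated in the card
    lora_dropout=0.01,           # decreased dropout (placeholder value)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="healing-lora",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=64,  # significantly increased (placeholder)
    learning_rate=5e-6,              # lowered learning rate (placeholder)
    num_train_epochs=1,
)
```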

Results

Preliminary results show an improvement. The model really needs a larger dataset with more cognitive tasks, such as OpenOrca. It writes decent stories and responds mostly factually. The core idea is solid.

Dataset

This LoRA was trained on a dataset consisting of teknium1’s roleplay-instruct-v2.1, part of the private Elieithyia dataset, and part of HF’s No Robots, chosen randomly to form an even three-way split by file size.
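
A rough sketch of how such an even split by file size can be assembled: the larger sources are randomly subsampled until each contributes roughly the same number of bytes as the smallest one. The file names are hypothetical and the logic is only illustrative.

```python
# Sketch: build an even three-way mix by byte size from three JSONL sources.
import random

sources = ["roleplay-instruct-v2.1.jsonl", "elieithyia.jsonl", "no_robots.jsonl"]

def load_lines(path):
    with open(path, encoding="utf-8") as f:
        return [line for line in f if line.strip()]

datasets = {path: load_lines(path) for path in sources}
# Target size per source = byte size of the smallest source.
target_bytes = min(
    sum(len(line.encode("utf-8")) for line in lines) for lines in datasets.values()
)

mixed = []
for path, lines in datasets.items():
    random.shuffle(lines)
    size, kept = 0, []
    for line in lines:
        if size >= target_bytes:
            break
        kept.append(line)
        size += len(line.encode("utf-8"))
    mixed.extend(kept)

random.shuffle(mixed)
with open("mixed_dataset.jsonl", "w", encoding="utf-8") as f:
    f.writelines(mixed)
```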
