---
license: apache-2.0
---

Description

This is BigMistral-13b (a 13b Mistral base model built with a modified NeverSleep recipe) merged with a rank-512 LoRA that was trained directly on top of it.
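
For reference, folding a LoRA adapter back into its base model is typically done with PEFT's `merge_and_unload`; the sketch below is a minimal example, with placeholder model and adapter paths rather than the actual artifacts used here:

```python
# Minimal sketch of merging a LoRA adapter into a base model with PEFT.
# The model ID and adapter path are placeholders, not the real artifacts.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("athirdpath/BigMistral-13b")  # placeholder ID
model = PeftModel.from_pretrained(base, "path/to/rank-512-lora")          # placeholder path

merged = model.merge_and_unload()  # folds the LoRA weights into the base weights
merged.save_pretrained("BigMistral-13b-merged")
```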

Logic

My 20b Llama 2 merges did well, in part due to the inclusion of Elieithyia-20b, which was trained on top of a 20b merge directly. This time, I trained the LoRA not with the traditional goals in mind, but with the aim of "healing" the 13b merge. This involved significantly increasing the gradient accumulation steps, lowering the learning rate, and decreasing the dropout. Fingers crossed!
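
To make those knobs concrete, here is a hedged sketch of the kind of LoRA training configuration described above, using PEFT and Transformers; the specific values are illustrative guesses, not the settings actually used:

```python
# Illustrative LoRA "healing" setup: rank 512, low dropout, low learning rate,
# and many gradient accumulation steps. All numeric values are assumptions.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=512,                      # rank-512 LoRA, as stated in the description
    lora_alpha=512,             # assumed; alpha is not given in the card
    lora_dropout=0.01,          # "decreasing the dropout"
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="lora-heal",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=64,  # "increasing the gradient accumulation steps significantly"
    learning_rate=5e-6,              # "lowering the learning rate"
    num_train_epochs=1,
)
```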

Results

Still training, coming soon.

Dataset

This LoRA was trained on a dataset consisting of teknium1's roleplay-instruct-v2.1, plus randomly chosen portions of the private Elieithyia dataset and of HF's No Robots, forming an even three-way split by file size.
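
One simple way to build such a split is to keep the smallest source whole and randomly sample lines from the other two until each contributes roughly the same number of bytes. The sketch below assumes JSONL files with placeholder names; it is not the actual preprocessing script:

```python
# Illustrative sketch of an even three-way split by file size.
# File names are placeholders for the three source datasets.
import os
import random

random.seed(0)
target_bytes = os.path.getsize("roleplay_instruct_v2.1.jsonl")  # anchor size

def sample_to_size(path, target):
    """Randomly pick lines from `path` until their total size reaches `target` bytes."""
    with open(path, encoding="utf-8") as f:
        lines = f.readlines()
    random.shuffle(lines)
    out, total = [], 0
    for line in lines:
        if total >= target:
            break
        out.append(line)
        total += len(line.encode("utf-8"))
    return out

with open("roleplay_instruct_v2.1.jsonl", encoding="utf-8") as f:
    mixed = f.readlines()
mixed += sample_to_size("elieithyia_private.jsonl", target_bytes)  # placeholder
mixed += sample_to_size("no_robots.jsonl", target_bytes)           # placeholder
random.shuffle(mixed)

with open("merged_dataset.jsonl", "w", encoding="utf-8") as f:
    f.writelines(mixed)
```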