athirdpath committed c1ad348 (1 parent: de93842) · Create README.md

Files changed (1): README.md added (+19 lines)
---
license: apache-2.0
---

### Description

This is BigMistral-13b (a 13b Mistral base model built from a modified NeverSleep recipe) merged with a 512-rank LoRA trained directly on top of it.
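Merging a rank-r LoRA back into a base model amounts to adding the scaled low-rank product to each adapted weight matrix, W' = W + (alpha/r)·BA. A toy pure-Python sketch of that arithmetic (the dimensions, values, and function names are illustrative only; the actual merge uses rank 512 over the full 13b weights):

```python
# Toy illustration of merging a LoRA update into one base weight matrix:
# W_merged = W + (alpha / r) * (B @ A), with A of shape (r x in)
# and B of shape (out x r). Tiny matrices for clarity; the real model
# uses r = 512.

def matmul(X, Y):
    """Plain-Python matrix multiply."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def merge_lora(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A, leaving W untouched."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# 2x2 base weight, rank-1 adapter
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]            # r x in  (1 x 2)
B = [[3.0], [4.0]]          # out x r (2 x 1)
merged = merge_lora(W, A, B, alpha=1.0, r=1)
print(merged)  # [[4.0, 6.0], [4.0, 9.0]]
```

After the merge the adapter is folded into the weights, so inference needs no extra matrices.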

### Logic

My 20b Llama 2 merges did well, in part due to the inclusion of Elieithyia-20b, which was trained on top of a 20b merge directly. This time, I trained the LoRA not with the traditional goals in mind but with the aim of "healing" the 13b merge. This involved significantly increasing the gradient accumulation steps, lowering the learning rate, and decreasing the dropout. Fingers crossed!
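The gradient-accumulation knob can be pictured simply: gradients are averaged over several micro-batches before a single optimizer step, so the effective batch size grows (and updates get smoother) without extra VRAM. A minimal sketch with made-up numbers, not the run's actual hyperparameters:

```python
# Gradient accumulation: average the gradients of N micro-batches,
# then take one optimizer step with the mean. The effective batch
# size is per_device_batch * N. Values below are illustrative only.

def accumulated_step(w, micro_batch_grads, lr):
    """One SGD step using the mean gradient of the micro-batches."""
    g = sum(micro_batch_grads) / len(micro_batch_grads)
    return w - lr * g

w = 1.0
grads = [1.0, 2.0, 3.0, 4.0]   # 4 accumulation steps
w = accumulated_step(w, grads, lr=0.01)
print(round(w, 6))  # 0.975
```

Raising the accumulation count while lowering the learning rate, as described above, makes each update both better-averaged and smaller.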

### Results

Still training, coming soon.

### Dataset

This LoRA was trained on a dataset consisting of teknium1's [roleplay-instruct-v2.1](https://github.com/teknium1/GPTeacher/blob/main/Roleplay%20Supplemental/roleplay-instruct-v2.1.json), plus randomly chosen portions of the private Elieithyia dataset and of HF's No Robots, forming an even three-way split by file size.
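One way to build such an even split is to randomly sample examples from each extra source until it contributes roughly the same number of bytes as the anchor file. A hypothetical sketch (the function, budget, and greedy stopping rule are illustrative, not the actual preprocessing):

```python
import json
import random

# Hypothetical sketch: shuffle a source's examples, then keep adding
# them until the serialized size would exceed the byte budget, giving
# each source a roughly equal (filesize) share of the final dataset.

def sample_to_budget(examples, byte_budget, seed=0):
    """Randomly pick examples whose total JSON size fits the budget."""
    rng = random.Random(seed)
    pool = list(examples)
    rng.shuffle(pool)
    picked, used = [], 0
    for ex in pool:
        size = len(json.dumps(ex).encode("utf-8"))
        if used + size > byte_budget:
            break  # greedy stop at the first example that overflows
        picked.append(ex)
        used += size
    return picked

source = [{"text": "x" * n} for n in (10, 20, 30, 40)]
subset = sample_to_budget(source, byte_budget=60)
total = sum(len(json.dumps(e).encode("utf-8")) for e in subset)
print(total <= 60)  # True
```

Repeating this per source with the same budget yields the even three-way split described above.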