---
license: llama2
---

<h1>Kaori-34B-v1 Model Card</h1>

This model was fine-tuned by Kaeri and Jenti.

<h3>Dataset Strategy</h3>

 - Open-Platypus
 - Dolphin

We trained the model with supervised fine-tuning (SFT) on a mixture of 100% of the Open-Platypus data and 5% of the Dolphin data.
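
As a rough illustration, the mixture could be built as in the sketch below. This is a minimal sketch, assuming the datasets are already loaded as lists of examples; the sampling seed and method are illustrative assumptions, not taken from our training code.

<pre>
import random

random.seed(42)  # assumed seed, for reproducibility of the sketch only

def mix_datasets(platypus_examples, dolphin_examples, dolphin_fraction=0.05):
    """Combine all of Open-Platypus with a random 5% sample of Dolphin."""
    sampled = random.sample(dolphin_examples, int(dolphin_fraction * len(dolphin_examples)))
    return platypus_examples + sampled
</pre>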

We did not use GSM8K samples when generating the data.
To guard against data contamination, we also similarity-filtered the training data,
removing any example that matched one of the following tasks:

<pre>
filtering_tasks = [
    'cot_gsm8k',
    'cot_gsm8k_ii',
    'drop:2.0.0',
    'winogrande:1.1.0',
    'task228_arc_answer_generation_easy',
    'ai2_arc/ARC-Challenge:1.0.0',
    'ai2_arc/ARC-Easy:1.0.0',
    'task229_arc_answer_generation_hard',
    'hellaswag:1.1.0', 
    'task1389_hellaswag_completion'
]
</pre>
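
The filtering step itself is not shown above. Below is a minimal sketch of one way to implement it, using difflib string similarity; the 0.8 threshold and the toy data are assumptions, and the actual pipeline may use a different similarity measure.

<pre>
from difflib import SequenceMatcher

def is_contaminated(text, benchmark_prompts, threshold=0.8):
    """Return True if `text` is too similar to any benchmark prompt.

    The threshold is an assumed value; the real cutoff is not documented.
    """
    return any(
        SequenceMatcher(None, text, prompt).ratio() >= threshold
        for prompt in benchmark_prompts
    )

# Toy stand-ins; in practice these come from the training set and the tasks above.
benchmark_prompts = ["What is 2 + 2?"]
train_data = [{"instruction": "What is 2 + 2?"}, {"instruction": "Summarize Hamlet."}]

clean_data = [ex for ex in train_data
              if not is_contaminated(ex["instruction"], benchmark_prompts)]
</pre>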


<h3>Framework:</h3>

 - https://github.com/hiyouga/LLaMA-Factory


<h3>Parameters:</h3>

 - Finetune type  :  LoRA
 - GPUs           :  4 × A100 (80GB)
 - Epochs         :  3
 - Batch size     :  8
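
For reference, an equivalent configuration in plain Hugging Face PEFT/Transformers terms might look like the sketch below. Only the LoRA fine-tune type, 3 epochs, batch size 8, and 4 × A100 setup are documented above; the LoRA rank, alpha, target modules, and the per-device batch interpretation are assumptions.

<pre>
from transformers import TrainingArguments
from peft import LoraConfig

# Assumed LoRA hyperparameters -- only "LoRA" itself is documented above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="kaori-34b-v1-lora",  # hypothetical output path
    num_train_epochs=3,              # documented: 3 epochs
    per_device_train_batch_size=2,   # assumed reading: 2 per GPU x 4 GPUs = 8
    bf16=True,                       # assumed precision for A100s
)
</pre>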