---
license: llama2
---
<h1>Kaori-34B-v1 Model Card</h1>
This model was fine-tuned by Kaeri and Jenti.
<h3>Datasets</h3>
- Open-Platypus
- Dolphin
- OpenOrca
We did not use GSM8K samples when generating the training data.
We were also careful about data contamination: training data was
similarity-filtered out if it corresponded to any of the tasks below
(a minimal sketch of such a filter follows the task list).
<pre>
filtering_tasks = [
'cot_gsm8k',
'cot_gsm8k_ii',
'drop:2.0.0',
    'winogrande:1.1.0',
'task228_arc_answer_generation_easy',
'ai2_arc/ARC-Challenge:1.0.0',
'ai2_arc/ARC-Easy:1.0.0',
'task229_arc_answer_generation_hard',
'hellaswag:1.1.0',
'task1389_hellaswag_completion'
]
</pre>
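
The snippet below is a minimal sketch of the kind of contamination filter described above, assuming plain-text samples tagged with their source task. The function names, the difflib-based similarity measure, and the 0.8 threshold are illustrative assumptions, not the exact pipeline used for this model.
<pre>
# Hypothetical contamination filter; uses the filtering_tasks list defined above.
from difflib import SequenceMatcher

def is_contaminated(sample_text, benchmark_texts, threshold=0.8):
    """True if the sample is too similar to any held-out benchmark example."""
    return any(
        SequenceMatcher(None, sample_text, ref).ratio() >= threshold
        for ref in benchmark_texts
    )

def filter_training_data(samples, benchmark_texts):
    """Drop samples from blocked tasks and samples that resemble benchmark data."""
    return [
        s for s in samples
        if s.get("task") not in filtering_tasks
        and not is_contaminated(s["text"], benchmark_texts)
    ]
</pre>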
<h3>Framework</h3>
- https://github.com/hiyouga/LLaMA-Factory
<h3>Parameters</h3>
- Fine-tune type: LoRA
- GPUs: 4 x A100 (80GB)
- Epochs: 3
- Batch size: 8
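
For reference, the sketch below expresses roughly equivalent LoRA settings with the Hugging Face peft and transformers libraries (which LLaMA-Factory builds on). The base model path, LoRA rank/alpha/dropout, target modules, and precision are assumptions made for illustration; only the epoch count and the batch size come from the list above.
<pre>
# Illustrative LoRA setup; values marked "assumed" are not specified on this card.
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("path/to/34b-base-model")  # base model not specified here

lora_config = LoraConfig(
    r=64,                                 # assumed LoRA rank
    lora_alpha=16,                        # assumed scaling factor
    lora_dropout=0.05,                    # assumed dropout
    target_modules=["q_proj", "v_proj"],  # assumed target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

training_args = TrainingArguments(
    output_dir="kaori-34b-v1-lora",
    num_train_epochs=3,                   # from the card
    per_device_train_batch_size=2,        # 2 per GPU x 4 A100s = 8 (one reading of "Batch size: 8")
    bf16=True,                            # assumed precision
)
</pre>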