---
license: llama3
base_model: maywell/Llama-3-Ko-Luxia-Instruct
tags:
- generated_from_trainer
model-index:
- name: data/output/1min-v2-luxia-8b
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: maywell/Llama-3-Ko-Luxia-Instruct
trust_remote_code: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
  - path: "../data/generated_ds.json"
    type: alpaca
    conversation: chatml
dataset_prepared_path: ../data/dataset_v2_pre
val_set_size: 0.05
output_dir: ../data/output/1min-v2-luxia-8b
sequence_len: 1024
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 10
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 2e-6
train_on_inputs: false
group_by_length: false
bf16: auto
fp16: null
tf32: false
gradient_checkpointing: true
early_stopping_patience: null
resume_from_checkpoint: null
local_rank: null
logging_steps: 1
xformers_attention: null
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size: null
eval_max_new_tokens: 128
saves_per_epoch: 1
save_total_limit: 4
debug: true
deepspeed: deepspeed_configs/zero2.json
weight_decay: 0.0
special_tokens:
  pad_token: <|end_of_text|>
```

</details><br>
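The `datasets` entry above points `type: alpaca` at `../data/generated_ds.json`. For readers reproducing the setup, below is a minimal sketch of a record in that schema, assuming the standard alpaca `instruction`/`input`/`output` layout that axolotl's alpaca loader expects; the example content is purely illustrative, since the actual dataset is not published.

```python
import json

# Illustrative alpaca-schema records for a file like generated_ds.json.
# The real dataset contents are not published; this only shows the expected shape.
records = [
    {
        "instruction": "다음 문장을 한 문장으로 요약하세요.",  # "Summarize the following text in one sentence."
        "input": "서울은 대한민국의 수도이며 ...",  # optional context; may be an empty string
        "output": "서울은 대한민국의 수도이다.",  # target completion the model is trained on
    },
]

with open("generated_ds.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```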

# data/output/1min-v2-luxia-8b

This model is a fine-tuned version of [maywell/Llama-3-Ko-Luxia-Instruct](https://huggingface.co/maywell/Llama-3-Ko-Luxia-Instruct) on the `../data/generated_ds.json` dataset declared in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 2.1100

## Model description

More information needed

## Intended uses & limitations

More information needed

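A minimal inference sketch with 🤗 Transformers is shown below. The repo id is a placeholder, since this card does not state where the fine-tuned weights are published, and the prompt is only an example; the pad-token line mirrors the `pad_token: <|end_of_text|>` setting from the training config.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id: substitute the actual location of the fine-tuned weights.
model_id = "your-org/1min-v2-luxia-8b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # training ran with bf16: auto
    device_map="auto",
)

# Mirror the training config, which pads with <|end_of_text|>.
tokenizer.pad_token = "<|end_of_text|>"

prompt = "안녕하세요, 자기소개를 해 주세요."  # "Hello, please introduce yourself."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
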
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 28 (derivation sketched below)
- total_eval_batch_size: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 10

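The total train batch size above is not an independent setting; it follows from the config as micro_batch_size × gradient_accumulation_steps × num_devices. A one-line check:

```python
# Effective (total) train batch size implied by the axolotl config above.
micro_batch_size = 1        # micro_batch_size
gradient_accumulation = 4   # gradient_accumulation_steps
num_devices = 7             # GPUs reported by the Trainer

assert micro_batch_size * gradient_accumulation * num_devices == 28
```
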
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6145 | 0.0513 | 1 | 2.7217 |
| 2.7668 | 0.2564 | 5 | 2.7018 |
| 2.6304 | 0.5128 | 10 | 2.5065 |
| 2.3635 | 0.7692 | 15 | 2.3580 |
| 2.4553 | 1.0256 | 20 | 2.2813 |
| 2.2344 | 1.2436 | 25 | 2.2339 |
| 2.4562 | 1.5 | 30 | 2.2017 |
| 2.0943 | 1.7564 | 35 | 2.1726 |
| 2.0695 | 2.0128 | 40 | 2.1425 |
| 1.8616 | 2.2308 | 45 | 2.1171 |
| 2.0498 | 2.4872 | 50 | 2.1040 |
| 1.9028 | 2.7436 | 55 | 2.0984 |
| 1.9057 | 3.0 | 60 | 2.0841 |
| 1.7464 | 3.2179 | 65 | 2.0784 |
| 1.8284 | 3.4744 | 70 | 2.0788 |
| 1.8866 | 3.7308 | 75 | 2.0761 |
| 1.8927 | 3.9872 | 80 | 2.0673 |
| 1.5778 | 4.2051 | 85 | 2.0779 |
| 1.7274 | 4.4615 | 90 | 2.0934 |
| 1.7431 | 4.7179 | 95 | 2.0652 |
| 1.8728 | 4.9744 | 100 | 2.0618 |
| 1.5729 | 5.1923 | 105 | 2.0837 |
| 1.4631 | 5.4487 | 110 | 2.0873 |
| 1.4758 | 5.7051 | 115 | 2.0744 |
| 1.5289 | 5.9615 | 120 | 2.0899 |
| 1.515 | 6.1795 | 125 | 2.0919 |
| 1.5757 | 6.4359 | 130 | 2.0978 |
| 1.5392 | 6.6923 | 135 | 2.0986 |
| 1.5764 | 6.9487 | 140 | 2.0977 |
| 1.4178 | 7.1667 | 145 | 2.0938 |
| 1.5983 | 7.4231 | 150 | 2.1006 |
| 1.5096 | 7.6795 | 155 | 2.1044 |
| 1.483 | 7.9359 | 160 | 2.1065 |
| 1.4619 | 8.1538 | 165 | 2.1057 |
| 1.3028 | 8.4103 | 170 | 2.1074 |
| 1.4681 | 8.6667 | 175 | 2.1090 |
| 1.4215 | 8.9231 | 180 | 2.1089 |
| 1.4686 | 9.1410 | 185 | 2.1094 |
| 1.5154 | 9.3974 | 190 | 2.1100 |

### Framework versions

- Transformers 4.40.2
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1