Paladiso committed
Commit e92ea2f · verified · 1 Parent(s): 89db551

End of training

Files changed (1)
  1. README.md +16 -19
README.md CHANGED
@@ -1,14 +1,13 @@
 ---
 library_name: peft
-license: apache-2.0
-base_model: Intel/neural-chat-7b-v3-3
+base_model: katuni4ka/tiny-random-qwen1.5-moe
 tags:
 - axolotl
 - generated_from_trainer
 datasets:
-- Paladiso/dataset_134ef63e-6bde-4baa-a916-4167b60735a2
+- Paladiso/dataset_5e3967ba-d17c-4d93-91ec-a23620abb5dc
 model-index:
-- name: 6ce3517a-be06-447f-9280-97565278564b
+- name: 52d5acd4-4d39-43e2-8519-fe17241200b6
   results: []
 ---
 
@@ -21,17 +20,17 @@ should probably proofread and complete it, then remove this comment. -->
 axolotl version: `0.6.0`
 ```yaml
 adapter: lora
-base_model: Intel/neural-chat-7b-v3-3
+base_model: katuni4ka/tiny-random-qwen1.5-moe
 bf16: auto
 chat_template: llama3
 dataset_prepared_path: /workspace/axolotl/data/prepared
 datasets:
 - ds_type: json
   format: custom
-  path: Paladiso/dataset_134ef63e-6bde-4baa-a916-4167b60735a2
+  path: Paladiso/dataset_5e3967ba-d17c-4d93-91ec-a23620abb5dc
   type:
-    field_instruction: title
-    field_output: text
+    field_instruction: instruction
+    field_output: output
     system_format: '{system}'
     system_prompt: ''
 debug: null
@@ -47,7 +46,7 @@ fsdp_config: null
 gradient_accumulation_steps: 4
 gradient_checkpointing: false
 group_by_length: false
-hub_model_id: Paladiso/6ce3517a-be06-447f-9280-97565278564b
+hub_model_id: Paladiso/52d5acd4-4d39-43e2-8519-fe17241200b6
 hub_private_repo: true
 hub_repo: null
 hub_strategy: checkpoint
@@ -78,8 +77,6 @@ sample_packing: false
 save_safetensors: true
 saves_per_epoch: 4
 sequence_len: 512
-special_tokens:
-  pad_token: </s>
 strict: false
 tf32: false
 tokenizer_type: AutoTokenizer
@@ -89,10 +86,10 @@ use_accelerate: true
 val_set_size: 0.05
 wandb_entity: null
 wandb_mode: online
-wandb_name: 134ef63e-6bde-4baa-a916-4167b60735a2
+wandb_name: 5e3967ba-d17c-4d93-91ec-a23620abb5dc
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
-wandb_runid: 134ef63e-6bde-4baa-a916-4167b60735a2
+wandb_runid: 5e3967ba-d17c-4d93-91ec-a23620abb5dc
 warmup_steps: 10
 weight_decay: 0.0
 xformers_attention: null
@@ -101,11 +98,11 @@ xformers_attention: null
 
 </details><br>
 
-# 6ce3517a-be06-447f-9280-97565278564b
+# 52d5acd4-4d39-43e2-8519-fe17241200b6
 
-This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on the Paladiso/dataset_134ef63e-6bde-4baa-a916-4167b60735a2 dataset.
+This model is a fine-tuned version of [katuni4ka/tiny-random-qwen1.5-moe](https://huggingface.co/katuni4ka/tiny-random-qwen1.5-moe) on the Paladiso/dataset_5e3967ba-d17c-4d93-91ec-a23620abb5dc dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.9266
+- Loss: 11.9366
 
 ## Model description
 
@@ -139,9 +136,9 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 3.8278 | 0.0066 | 3 | 0.9939 |
-| 4.5361 | 0.0133 | 6 | 0.9500 |
-| 5.5298 | 0.0199 | 9 | 0.9266 |
+| 11.9333 | 0.0002 | 3 | 11.9376 |
+| 11.9358 | 0.0004 | 6 | 11.9372 |
+| 11.9335 | 0.0007 | 9 | 11.9366 |
 
 
 ### Framework versions
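For anyone picking up the artifact from this run: below is a minimal sketch of attaching the trained LoRA adapter to its base model with PEFT. The two repo IDs come from the config in this commit; the tokenizer choice, prompt, and generation settings are illustrative assumptions, not part of the training run. Note the config sets `hub_private_repo: true`, so reading the adapter repo requires an authenticated Hugging Face token.

```python
# Minimal sketch: load the LoRA adapter from this run on top of its base model.
# Repo IDs are taken from the config in this commit; the prompt and generation
# settings below are placeholders, not values from the training run.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "katuni4ka/tiny-random-qwen1.5-moe"                 # base_model
adapter_id = "Paladiso/52d5acd4-4d39-43e2-8519-fe17241200b6"  # hub_model_id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # applies the adapter weights

inputs = tokenizer("Hello, world", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```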