tuanna08go committed (verified)
Commit bb44f6d · 1 Parent(s): 57c5091

End of training

Files changed (2)
  1. README.md +11 -4
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -59,7 +59,7 @@ lora_model_dir: null
 lora_r: 8
 lora_target_linear: true
 lr_scheduler: cosine
-max_steps: 1
+max_steps: 50
 micro_batch_size: 2
 mlflow_experiment_name: /tmp/d9bf134d8c3d5800_train_data.json
 model_type: AutoModelForCausalLM
@@ -86,7 +86,7 @@ wandb_name: c02917cc-8f87-4062-9fdc-c3e78cea4836
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
 wandb_runid: c02917cc-8f87-4062-9fdc-c3e78cea4836
-warmup_steps: 1
+warmup_steps: 10
 weight_decay: 0.0
 xformers_attention: null
@@ -97,6 +97,8 @@ xformers_attention: null
 # c02917cc-8f87-4062-9fdc-c3e78cea4836
 
 This model is a fine-tuned version of [tiiuae/falcon-rw-1b](https://huggingface.co/tiiuae/falcon-rw-1b) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 1.7612
 
 ## Model description
 
@@ -123,14 +125,19 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 8
 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
-- lr_scheduler_warmup_steps: 2
-- training_steps: 1
+- lr_scheduler_warmup_steps: 10
+- training_steps: 50
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
 | No log | 0.0001 | 1 | 2.2269 |
+| 9.6147 | 0.0013 | 10 | 2.0748 |
+| 7.7107 | 0.0026 | 20 | 1.8417 |
+| 6.855 | 0.0038 | 30 | 1.7847 |
+| 7.0684 | 0.0051 | 40 | 1.7659 |
+| 6.9004 | 0.0064 | 50 | 1.7612 |
 
 
 ### Framework versions
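The commit raises `warmup_steps` from 1 to 10 and `training_steps` from 1 to 50 under the cosine scheduler. A minimal sketch of the resulting learning-rate shape, assuming a linear warmup to a peak rate followed by cosine decay to zero (the actual `learning_rate` value is not shown in this diff, so `base_lr` below is a placeholder):

```python
import math

def cosine_with_warmup(step, warmup_steps=10, total_steps=50, base_lr=2e-4):
    # Linear warmup to base_lr over the first `warmup_steps`, then cosine
    # decay to zero by `total_steps` -- mirroring lr_scheduler: cosine,
    # warmup_steps: 10, training_steps: 50 from the config above.
    # base_lr is an assumed placeholder, not taken from this diff.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

With the original `warmup_steps: 1` and `max_steps: 1`, the run ended before the schedule ever decayed, which is why the first README reported only the step-1 loss.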
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2ab99f3f0d4e804d9cb2c93113de625924047979d5099b5ce7980e020c1fd754
+oid sha256:36471be237bc34c1469507e13be244ad08cc98d9737f60a3bc121e7ee2d8144f
 size 25236362
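The adapter file stays at ~25 MB because only the LoRA matrices change between commits, and their size is fixed by the config (`lora_r: 8`, `lora_target_linear: true`), not by how long training ran. A rough sketch of the per-layer parameter count LoRA adds, using falcon-rw-1b's 2048 hidden size as an illustrative assumption:

```python
def lora_param_count(d_in, d_out, r=8):
    # LoRA adds two low-rank factors per adapted linear layer:
    # A of shape (r, d_in) and B of shape (d_out, r), so the extra
    # parameter count is r * (d_in + d_out). r=8 matches lora_r above.
    return r * (d_in + d_out)

# e.g. a square 2048x2048 projection (hidden size of falcon-rw-1b,
# used here as an assumption; exact layer shapes are not in this diff):
extra = lora_param_count(2048, 2048)  # 32768 params vs ~4.2M in the base layer
```

Summed over all targeted linear layers and serialized in 16- or 32-bit floats, this is what fills the 25,236,362-byte `adapter_model.bin`.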