willtensora committed (verified)
Commit 488eaa0 · 1 Parent(s): 9495ae5

End of training

Files changed (3):
1. README.md +15 -31
2. generation_config.json +2 -2
3. pytorch_model.bin +2 -2
README.md CHANGED

````diff
@@ -1,12 +1,11 @@
 ---
 library_name: transformers
-license: apache-2.0
-base_model: JackFram/llama-68m
+base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
 tags:
 - axolotl
 - generated_from_trainer
 model-index:
-- name: 4ada8092-cc1e-445c-9260-a580ef2586ae
+- name: dab16ec4-4ddf-4ee5-8888-3dc2a83f0f86
   results: []
 ---
@@ -18,19 +17,19 @@ should probably proofread and complete it, then remove this comment. -->
 
 axolotl version: `0.4.1`
 ```yaml
-base_model: JackFram/llama-68m
+base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
 batch_size: 32
 bf16: true
 chat_template: tokenizer_default_fallback_alpaca
 datasets:
 - data_files:
-  - ff3a521d02fa72b2_train_data.json
+  - f4a61305a746447c_train_data.json
   ds_type: json
   format: custom
-  path: /workspace/input_data/ff3a521d02fa72b2_train_data.json
+  path: /workspace/input_data/f4a61305a746447c_train_data.json
   type:
-    field_instruction: context
-    field_output: question
+    field_instruction: sentence1
+    field_output: sentence2
     format: '{instruction}'
     no_input_format: '{instruction}'
     system_format: '{system}'
@@ -40,7 +39,7 @@ flash_attention: true
 gpu_memory_limit: 80GiB
 gradient_checkpointing: true
 group_by_length: true
-hub_model_id: willtensora/4ada8092-cc1e-445c-9260-a580ef2586ae
+hub_model_id: willtensora/dab16ec4-4ddf-4ee5-8888-3dc2a83f0f86
 hub_strategy: checkpoint
 learning_rate: 0.0002
 logging_steps: 10
@@ -56,15 +55,13 @@ sample_packing: false
 save_steps: 40
 save_total_limit: 1
 sequence_len: 2048
-special_tokens:
-  pad_token: </s>
 tokenizer_type: LlamaTokenizerFast
 train_on_inputs: false
 trust_remote_code: true
 val_set_size: 0.1
 wandb_entity: ''
 wandb_mode: online
-wandb_name: JackFram/llama-68m-/workspace/input_data/ff3a521d02fa72b2_train_data.json
+wandb_name: trl-internal-testing/tiny-random-LlamaForCausalLM-/workspace/input_data/f4a61305a746447c_train_data.json
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
 wandb_runid: default
@@ -75,11 +72,9 @@ xformers_attention: true
 
 </details><br>
 
-# 4ada8092-cc1e-445c-9260-a580ef2586ae
+# dab16ec4-4ddf-4ee5-8888-3dc2a83f0f86
 
-This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on the None dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.2208
+This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the None dataset.
 
 ## Model description
 
@@ -108,24 +103,13 @@ The following hyperparameters were used during training:
 - total_eval_batch_size: 32
 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
-- lr_scheduler_warmup_steps: 10
-- training_steps: 205
+- training_steps: 13
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:------:|:----:|:---------------:|
-| No log | 0.0006 | 1 | 6.7193 |
-| 1.5212 | 0.0122 | 20 | 1.0774 |
-| 0.7826 | 0.0244 | 40 | 0.6352 |
-| 0.5492 | 0.0366 | 60 | 0.4713 |
-| 0.3663 | 0.0488 | 80 | 0.3924 |
-| 0.3533 | 0.0610 | 100 | 0.3112 |
-| 0.2434 | 0.0732 | 120 | 0.2761 |
-| 0.2989 | 0.0854 | 140 | 0.2445 |
-| 0.2464 | 0.0976 | 160 | 0.2251 |
-| 0.2233 | 0.1098 | 180 | 0.2203 |
-| 0.2213 | 0.1220 | 200 | 0.2208 |
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| No log | 0.01 | 1 | 10.3686 |
 
 
 ### Framework versions
````
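For context on the dataset change above: the config's custom `type:` block substitutes each record's `sentence1` field into the `'{instruction}'` template and trains on `sentence2` as the output. A minimal sketch of that mapping (the `build_example` helper and the sample record are hypothetical illustrations, not axolotl's internal API):

```python
import json

def build_example(record: dict) -> dict:
    # field_instruction -> substituted into format '{instruction}'
    prompt = "{instruction}".format(instruction=record["sentence1"])
    # field_output -> the completion the model is trained to produce
    return {"prompt": prompt, "completion": record["sentence2"]}

# Hypothetical record shaped like f4a61305a746447c_train_data.json
sample = {
    "sentence1": "A man is playing a guitar.",
    "sentence2": "Someone is playing an instrument.",
}
print(json.dumps(build_example(sample), indent=2))
```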
generation_config.json CHANGED

```diff
@@ -2,7 +2,7 @@
   "_from_model_config": true,
   "bos_token_id": 0,
   "do_sample": true,
-  "eos_token_id": 2,
-  "pad_token_id": 1,
+  "eos_token_id": 1,
+  "pad_token_id": 2,
   "transformers_version": "4.46.0"
 }
```
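The change above swaps `eos_token_id` and `pad_token_id` to match the new base model's tokenizer. A quick way to sanity-check the published values (a sketch assuming the `hub_model_id` repo from the config above is public and reachable):

```python
from transformers import GenerationConfig

# Fetch the generation settings this commit published to the Hub.
gen_cfg = GenerationConfig.from_pretrained(
    "willtensora/dab16ec4-4ddf-4ee5-8888-3dc2a83f0f86"
)
# Values after this commit; they were reversed (eos=2, pad=1) before it.
assert gen_cfg.eos_token_id == 1
assert gen_cfg.pad_token_id == 2
print(gen_cfg)
```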
pytorch_model.bin CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:432d9ed4d450961d63ceeda6070006f3b7eae9f4bfd1ec6ba4cd115f7bdb6b5a
-size 136067757
+oid sha256:70872e8af35c48abf8dc8a0f41f28f7673e23a762cd5f5a4707b0788bf617ebf
+size 2071661
```
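The weights file is stored as a Git LFS pointer: `oid` is the SHA-256 of the actual binary and `size` is its byte count (the drop from ~136 MB to ~2 MB reflects the switch to the tiny random base model). A sketch for verifying a downloaded copy against the pointer, assuming `pytorch_model.bin` sits in the current directory:

```python
import hashlib
import os

EXPECTED_OID = "70872e8af35c48abf8dc8a0f41f28f7673e23a762cd5f5a4707b0788bf617ebf"
EXPECTED_SIZE = 2071661

# Stream the file so even large checkpoints hash in constant memory.
digest = hashlib.sha256()
with open("pytorch_model.bin", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

assert os.path.getsize("pytorch_model.bin") == EXPECTED_SIZE
assert digest.hexdigest() == EXPECTED_OID
print("pytorch_model.bin matches the LFS pointer")
```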