lapp0 committed on
Commit 0678ec3 · verified · 1 Parent(s): 60e8996

Training in progress, step 61875

README.md CHANGED
@@ -44,42 +44,42 @@ More information needed
44
  | step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
45
  | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
46
  | **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
47
- | 0 | 0 | 2473901162496.0 | 170424302305280.0 | 22.7948 | 25.4611 | 98.189 | 12.293 | 4060086272.0 | 71468255805440.0 |
48
- | 2500 | 0.0404 | 800.0 | 6240.0 | 2.9668 | 25.4143 | 98.37 | 12.316 | 470.0 | 5024.0 |
49
- | 5000 | 0.0808 | 326.0 | 1480.0 | 2.1695 | 25.4751 | 98.135 | 12.287 | 247.0 | 278.0 |
50
- | 7500 | 0.1212 | 224.0 | 804.0 | 1.8398 | 25.4585 | 98.199 | 12.295 | 185.0 | 191.0 |
51
- | 10000 | 0.1616 | 171.0 | 608.0 | 1.6411 | 25.4667 | 98.167 | 12.291 | 146.0 | 165.0 |
52
- | 12500 | 0.2020 | 127.0 | 482.0 | 1.3752 | 25.4857 | 98.094 | 12.281 | 111.0 | 141.0 |
53
- | 15000 | 0.2424 | 104.5 | 436.0 | 1.2406 | 25.4584 | 98.199 | 12.295 | 93.5 | 101.0 |
54
- | 17500 | 0.2828 | 90.5 | 340.0 | 1.1275 | 25.4822 | 98.108 | 12.283 | 74.0 | 147.0 |
55
- | 20000 | 0.3232 | 82.5 | 318.0 | 1.0364 | 25.4803 | 98.115 | 12.284 | 69.5 | 136.0 |
56
- | 22500 | 0.3636 | 74.0 | 236.0 | 0.8965 | 25.4918 | 98.071 | 12.278 | 61.0 | 88.0 |
57
- | 25000 | 0.4040 | 67.0 | 215.0 | 0.8526 | 25.4599 | 98.194 | 12.294 | 52.0 | 99.5 |
58
- | 27500 | 0.4444 | 63.75 | 220.0 | 0.8130 | 25.4697 | 98.156 | 12.289 | 46.75 | 111.5 |
59
- | 30000 | 0.4848 | 65.5 | 220.0 | 0.8063 | 25.4728 | 98.144 | 12.288 | 53.0 | 71.5 |
60
- | 32500 | 0.5253 | 63.75 | 193.0 | 0.7915 | 25.4447 | 98.252 | 12.301 | 45.75 | 112.5 |
61
- | 35000 | 0.5657 | 61.5 | 193.0 | 0.7347 | 25.4975 | 98.049 | 12.276 | 42.75 | 64.5 |
62
- | 37500 | 0.6061 | 61.0 | 168.0 | 0.7146 | 25.4651 | 98.174 | 12.291 | 44.5 | 58.5 |
63
- | 40000 | 0.6465 | 58.75 | 182.0 | 0.7022 | 25.4903 | 98.076 | 12.279 | 41.0 | 95.0 |
64
- | 42500 | 0.6869 | 59.75 | 175.0 | 0.6748 | 25.4884 | 98.084 | 12.28 | 42.5 | 59.5 |
65
- | 45000 | 0.7273 | 53.75 | 146.0 | 0.5747 | 25.4692 | 98.158 | 12.289 | 36.0 | 51.25 |
66
- | 47500 | 0.7677 | 53.0 | 136.0 | 0.5532 | 25.4941 | 98.062 | 12.277 | 34.25 | 38.25 |
67
- | 50000 | 0.8081 | 52.25 | 139.0 | 0.5372 | 25.4685 | 98.16 | 12.29 | 33.25 | 43.25 |
68
- | 52500 | 0.8485 | 50.75 | 131.0 | 0.5245 | 25.4289 | 98.313 | 12.309 | 33.5 | 37.0 |
69
- | 55000 | 0.8889 | 50.5 | 128.0 | 0.5085 | 25.4853 | 98.096 | 12.282 | 32.25 | 35.25 |
70
- | 57500 | 0.9293 | 50.0 | 127.0 | 0.5024 | 25.483 | 98.105 | 12.283 | 31.875 | 33.75 |
71
- | 60000 | 0.9697 | 49.75 | 126.0 | 0.4989 | 25.4171 | 98.359 | 12.315 | 31.625 | 33.25 |
72
- | 61875 | 1.0 | 49.75 | 126.5 | 0.4982 | 25.4751 | 98.135 | 12.286 | 31.75 | 33.25 |
73
 
74
  # Resource Usage Comparison
75
 
76
- - VRAM Use: 7.7851 GB
77
 
78
- # Distillation (Teacher -> Student) Architecture Difference:
79
 
80
  - **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
81
  - **Total Parameters**: 124,439,808 -> 124,439,808
82
- - **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
83
  - **Model Size**: 0.24 GB -> 0.24 GB
84
 
85
  <details>
@@ -93,7 +93,7 @@ More information needed
93
  <br/>
94
 
95
  # Train Dataset
96
- Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
97
 
98
  - Num Samples: `247,500`
99
  - Subset: `20231101.en`
@@ -103,7 +103,7 @@ Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface
103
  # Training Objective
104
 
105
  ```
106
- DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=kl, layer_mapper=layer-2))
107
  ```
108
 
109
  # Hyperparameters
@@ -120,9 +120,9 @@ The following hyperparameters were used during training:
120
  - lr_scheduler_type: `linear`
121
  - lr_scheduler_warmup_ratio: `0.5`
122
  - num_epochs: `1.0`
123
- - distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=kl, layer_mapper=layer-2))`
124
  - train_embeddings: `True`
125
- - lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f0373c842b0>`
126
  - student_model_name_or_path: `None`
127
  - student_config_name_or_path: `None`
128
  - student_model_config: `None`
@@ -154,6 +154,6 @@ The following hyperparameters were used during training:
154
 
155
  # Framework Versions
156
  - Distily 0.2.0
157
- - Transformers 4.44.1
158
- - Pytorch 2.5.0.dev20240821+cu121
159
  - Datasets 2.21.0
 
44
  | step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
45
  | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
46
  | **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
47
+ | 0 | 0 | 957777707008.0 | 56624848830464.0 | 45.7836 | 30.1745 | 82.852 | 10.373 | 2566914048.0 | 36283883716608.0 |
48
+ | 2500 | 0.0404 | 2032.0 | 25856.0 | 20.6614 | 30.1889 | 82.812 | 10.368 | 1432.0 | 92672.0 |
49
+ | 5000 | 0.0808 | 488.0 | 3008.0 | 18.3376 | 30.1572 | 82.899 | 10.379 | 412.0 | 984.0 |
50
+ | 7500 | 0.1212 | 276.0 | 1488.0 | 16.9326 | 30.1713 | 82.86 | 10.374 | 239.0 | 231.0 |
51
+ | 10000 | 0.1616 | 204.0 | 880.0 | 16.0592 | 30.189 | 82.812 | 10.368 | 182.0 | 294.0 |
52
+ | 12500 | 0.2020 | 149.0 | 506.0 | 14.9386 | 30.151 | 82.916 | 10.381 | 125.0 | 158.0 |
53
+ | 15000 | 0.2424 | 119.5 | 470.0 | 14.3804 | 30.2036 | 82.772 | 10.363 | 86.5 | 144.0 |
54
+ | 17500 | 0.2828 | 93.5 | 430.0 | 14.0105 | 30.2613 | 82.614 | 10.343 | 72.5 | 178.0 |
55
+ | 20000 | 0.3232 | 77.0 | 280.0 | 13.4770 | 30.1964 | 82.791 | 10.365 | 59.75 | 85.5 |
56
+ | 22500 | 0.3636 | 63.75 | 219.0 | 12.9954 | 30.296 | 82.519 | 10.331 | 50.25 | 75.0 |
57
+ | 25000 | 0.4040 | 60.75 | 185.0 | 12.7840 | 30.2946 | 82.523 | 10.332 | 46.0 | 74.5 |
58
+ | 27500 | 0.4444 | 58.75 | 190.0 | 12.6366 | 30.3968 | 82.246 | 10.297 | 41.0 | 51.25 |
59
+ | 30000 | 0.4848 | 58.75 | 177.0 | 12.6497 | 30.3256 | 82.439 | 10.321 | 42.5 | 62.5 |
60
+ | 32500 | 0.5253 | 59.5 | 171.0 | 12.5958 | 30.3473 | 82.38 | 10.314 | 38.75 | 69.0 |
61
+ | 35000 | 0.5657 | 55.5 | 164.0 | 12.4809 | 30.4047 | 82.224 | 10.294 | 36.25 | 49.25 |
62
+ | 37500 | 0.6061 | 55.75 | 165.0 | 12.4218 | 30.2813 | 82.559 | 10.336 | 35.0 | 51.5 |
63
+ | 40000 | 0.6465 | 54.0 | 147.0 | 12.3726 | 30.199 | 82.784 | 10.365 | 33.75 | 51.75 |
64
+ | 42500 | 0.6869 | 55.0 | 144.0 | 12.3525 | 30.6915 | 81.456 | 10.198 | 34.0 | 53.75 |
65
+ | 45000 | 0.7273 | 50.75 | 129.0 | 12.1198 | 30.7649 | 81.262 | 10.174 | 29.875 | 36.5 |
66
+ | 47500 | 0.7677 | 50.5 | 122.5 | 12.0744 | 30.289 | 82.538 | 10.334 | 28.875 | 34.75 |
67
+ | 50000 | 0.8081 | 49.5 | 121.5 | 12.0388 | 30.3972 | 82.244 | 10.297 | 28.75 | 34.25 |
68
+ | 52500 | 0.8485 | 50.0 | 122.5 | 12.0214 | 30.351 | 82.37 | 10.313 | 28.5 | 38.75 |
69
+ | 55000 | 0.8889 | 49.5 | 119.0 | 11.9902 | 30.2303 | 82.698 | 10.354 | 27.625 | 34.5 |
70
+ | 57500 | 0.9293 | 49.25 | 119.0 | 11.9806 | 30.6005 | 81.698 | 10.229 | 27.625 | 33.25 |
71
+ | 60000 | 0.9697 | 49.25 | 118.0 | 11.9745 | 31.1957 | 80.139 | 10.033 | 27.5 | 33.0 |
72
+ | 61875 | 1.0 | 49.0 | 118.0 | 11.9734 | 31.2236 | 80.068 | 10.024 | 27.5 | 33.0 |
73
 
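The perplexity columns (`enwikippl`, `frwikippl`, `tinystoriesppl`, `zhwikippl`) are exponentiated token-level cross-entropy on held-out text from each corpus. A minimal sketch of how such a figure can be computed with Transformers — an illustration only, not the Distily evaluation loop, with `gpt2` standing in for the distilled checkpoint:

```python
# Minimal perplexity sketch (illustration only; not the Distily eval code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for the checkpoint being evaluated
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

text = "The quick brown fox jumps over the lazy dog."
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels=input_ids makes the model return the mean next-token cross-entropy.
    out = model(**enc, labels=enc["input_ids"])

print(f"perplexity: {torch.exp(out.loss).item():.2f}")
```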
74
  # Resource Usage Comparison
75
 
76
+ - VRAM Use: 7.7830 GB
77
 
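The card does not show how the VRAM figure is measured; one rough way to report peak GPU memory around a training step with plain PyTorch (CUDA only, and not necessarily how Distily computes the number above) is:

```python
import torch

torch.cuda.reset_peak_memory_stats()
# ... run a forward/backward/optimizer step here ...
peak_gb = torch.cuda.max_memory_allocated() / 1024 ** 3
print(f"VRAM Use: {peak_gb:.4f} GB")
```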
78
+ # Distillation (Teacher -> Student) Architecture Difference:
79
 
80
  - **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
81
  - **Total Parameters**: 124,439,808 -> 124,439,808
82
+ - **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
83
  - **Model Size**: 0.24 GB -> 0.24 GB
84
 
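The comparison above can be reproduced by loading both models and inspecting their parameters. A sketch, assuming the teacher is the stock `gpt2` checkpoint and using a hypothetical repo id for the student:

```python
import torch
from transformers import GPT2LMHeadModel

teacher = GPT2LMHeadModel.from_pretrained("gpt2", torch_dtype=torch.bfloat16)
student = GPT2LMHeadModel.from_pretrained(
    "lapp0/distily-gpt2-student",  # hypothetical repo id, not the actual checkpoint name
    torch_dtype=torch.bfloat16,
)

def describe(model):
    n_params = sum(p.numel() for p in model.parameters())
    size_gb = sum(p.numel() * p.element_size() for p in model.parameters()) / 1024 ** 3
    return n_params, next(model.parameters()).dtype, size_gb

for name, m in (("teacher", teacher), ("student", student)):
    n, dtype, size = describe(m)
    print(f"{name}: {n:,} parameters, dtype={dtype}, ~{size:.2f} GB")
```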
85
  <details>
 
93
  <br/>
94
 
95
  # Train Dataset
96
+ Trained on 145,725,467 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
97
 
98
  - Num Samples: `247,500`
99
  - Subset: `20231101.en`
 
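The subset named above is a standard configuration of the `wikimedia/wikipedia` dataset and can be loaded with the `datasets` library; a small sketch (streaming, so only a few records are pulled):

```python
from datasets import load_dataset

ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train", streaming=True)
for i, example in enumerate(ds):
    print(example["title"], len(example["text"]), "chars")
    if i == 2:
        break
```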
103
  # Training Objective
104
 
105
  ```
106
+ DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2))
107
  ```
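Read literally, this objective is a weighted sum of a KL-divergence term on the logits (weight 1) and a cosine term on the attention maps (weight 25), with student layers matched to teacher layers by the `layer-2` mapper. A rough PyTorch sketch of such a loss — an illustration of the idea, not Distily's implementation, and the exact semantics of the layer mapper are an assumption:

```python
import torch
import torch.nn.functional as F

def logits_kl(student_logits, teacher_logits):
    # KL(student || teacher) over the vocabulary dimension.
    s = F.log_softmax(student_logits, dim=-1)
    t = F.softmax(teacher_logits, dim=-1)
    return F.kl_div(s, t, reduction="batchmean")

def attn_cos(student_attns, teacher_attns):
    # Mean cosine distance between matched attention tensors,
    # assumed here to be aligned layer-for-layer after mapping.
    losses = []
    for s_a, t_a in zip(student_attns, teacher_attns):
        cos = F.cosine_similarity(s_a.flatten(1), t_a.flatten(1), dim=-1)
        losses.append((1.0 - cos).mean())
    return torch.stack(losses).mean()

def distillation_loss(student_out, teacher_out, attn_weight=25.0):
    # student_out / teacher_out are causal-LM outputs produced with output_attentions=True.
    return (logits_kl(student_out.logits, teacher_out.logits)
            + attn_weight * attn_cos(student_out.attentions, teacher_out.attentions))
```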
108
 
109
  # Hyperparameters
 
120
  - lr_scheduler_type: `linear`
121
  - lr_scheduler_warmup_ratio: `0.5`
122
  - num_epochs: `1.0`
123
+ - distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2))`
124
  - train_embeddings: `True`
125
+ - lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7fb6c83166e0>`
126
  - student_model_name_or_path: `None`
127
  - student_config_name_or_path: `None`
128
  - student_model_config: `None`
 
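The `linear` scheduler with `lr_scheduler_warmup_ratio: 0.5` means the learning rate ramps up over the first half of the 61,875 optimizer steps and decays linearly over the second half; the `LambdaLR` object in the dump is what Transformers' helper returns. A sketch with placeholder optimizer settings:

```python
import torch
from transformers import get_linear_schedule_with_warmup

total_steps = 61_875
warmup_steps = int(0.5 * total_steps)

model = torch.nn.Linear(8, 8)  # placeholder module
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # placeholder learning rate
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
)
# scheduler is a torch.optim.lr_scheduler.LambdaLR, matching the hyperparameter dump above.
```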
154
 
155
  # Framework Versions
156
  - Distily 0.2.0
157
+ - Transformers 4.44.0
158
+ - Pytorch 2.3.0
159
  - Datasets 2.21.0
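To confirm that a local environment matches the versions listed above:

```python
import datasets, torch, transformers

print("Transformers", transformers.__version__)
print("Pytorch", torch.__version__)
print("Datasets", datasets.__version__)
```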
config.json CHANGED
@@ -33,7 +33,7 @@
33
  }
34
  },
35
  "torch_dtype": "bfloat16",
36
- "transformers_version": "4.44.1",
37
  "use_cache": true,
38
  "vocab_size": 50257
39
  }
 
33
  }
34
  },
35
  "torch_dtype": "bfloat16",
36
+ "transformers_version": "4.44.0",
37
  "use_cache": true,
38
  "vocab_size": 50257
39
  }
generation_config.json CHANGED
@@ -2,5 +2,5 @@
2
  "_from_model_config": true,
3
  "bos_token_id": 50256,
4
  "eos_token_id": 50256,
5
- "transformers_version": "4.44.1"
6
  }
 
2
  "_from_model_config": true,
3
  "bos_token_id": 50256,
4
  "eos_token_id": 50256,
5
+ "transformers_version": "4.44.0"
6
  }
logs/attn_loss_fn=cos, attn_weight=25.0, layer_mapper=last, projector=orthogonal/events.out.tfevents.1724392710.f383272e719b ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b78287fdc9556dc3905a06e2f4f59ddb694faed2d0c0a1b6001809d1ca3ef468
3
+ size 29105737
logs/attn_loss_fn=cos, attn_weight=25.0, layer_mapper=last, projector=orthogonal/events.out.tfevents.1724402725.f383272e719b ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1418f2bbd9bc76367d0b4c051295fdeb6932d6e386dcfdab23f2f132fa7c9278
3
+ size 29632526
logs/attn_loss_fn=cos, attn_weight=25.0, layer_mapper=last, projector=orthogonal/events.out.tfevents.1724412586.f383272e719b ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:621dde369730aa54a7de3222f78cbfa7b5e67e0902de75a89c4f725f171b54cc
3
+ size 588
logs/attn_loss_fn=cos, attn_weight=5, layer_mapper=last, projector=orthogonal/completed.flag ADDED
File without changes
logs/attn_loss_fn=cos, attn_weight=5, layer_mapper=layer-2, projector=orthogonal/events.out.tfevents.1724412932.f383272e719b ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6280c99bf6909845a98c70aa2b6da6e29e95e5c69a2b8d001cc9192fe95dedb8
3
+ size 29632526
model.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:3423566db7ab3a81a2bac79f51971d645dbdf75a579843ecf23086303845e51d
3
  size 248894656
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:132f76e21a60b4fcf53fb2eecf98d05c5d1fb7392c082d0536ab4c7f5bd02e1e
3
  size 248894656
training_args.bin CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:c3b1f9c427fec40b7ccb98d130a307ac55d433f884f3f582130f245e7b104daf
3
  size 1017899144
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8c6af2f8f3d066af609c7323adb118fc5a6aad2e5366a39f8614efe68a72179d
3
  size 1017899144
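The added log files and the updated `model.safetensors` / `training_args.bin` entries are Git LFS pointer files: each records the spec version, the `sha256` oid of the stored blob, and its size in bytes. A short sketch for verifying a downloaded artifact against the oid in its pointer:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Hash the file in chunks so large artifacts don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Should print the oid recorded in the pointer, e.g. 132f76e2... for model.safetensors.
print(sha256_of("model.safetensors"))
```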