lapp0 committed · Commit b2fea80 (verified) · Parent: 4e7313d

Training in progress, step 61875
README.md CHANGED
@@ -44,42 +44,42 @@ More information needed
 | step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
 | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
 | **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
- | 0 | 0 | 2473901162496.0 | 170424302305280.0 | 25.7744 | 30.269 | 82.593 | 10.341 | 4060086272.0 | 71468255805440.0 |
- | 2500 | 0.0404 | 952.0 | 8032.0 | 6.1202 | 30.2377 | 82.678 | 10.351 | 652.0 | 6432.0 |
- | 5000 | 0.0808 | 378.0 | 1880.0 | 5.0293 | 30.3345 | 82.414 | 10.318 | 270.0 | 288.0 |
- | 7500 | 0.1212 | 230.0 | 820.0 | 4.5127 | 30.2749 | 82.577 | 10.339 | 201.0 | 174.0 |
- | 10000 | 0.1616 | 173.0 | 628.0 | 4.2288 | 30.3713 | 82.315 | 10.306 | 152.0 | 172.0 |
- | 12500 | 0.2020 | 127.5 | 482.0 | 3.8554 | 30.4218 | 82.178 | 10.289 | 106.5 | 157.0 |
- | 15000 | 0.2424 | 109.0 | 436.0 | 3.6684 | 30.2501 | 82.644 | 10.347 | 87.5 | 148.0 |
- | 17500 | 0.2828 | 93.5 | 350.0 | 3.5226 | 30.2559 | 82.628 | 10.345 | 73.5 | 119.5 |
- | 20000 | 0.3232 | 75.5 | 282.0 | 3.3351 | 30.5792 | 81.755 | 10.236 | 64.5 | 114.5 |
- | 22500 | 0.3636 | 67.5 | 213.0 | 3.1522 | 30.4457 | 82.113 | 10.281 | 53.25 | 80.5 |
- | 25000 | 0.4040 | 64.0 | 197.0 | 3.0806 | 30.4623 | 82.069 | 10.275 | 45.0 | 95.5 |
- | 27500 | 0.4444 | 58.25 | 203.0 | 3.0308 | 30.2865 | 82.545 | 10.335 | 40.75 | 93.0 |
- | 30000 | 0.4848 | 59.5 | 195.0 | 3.0207 | 30.2442 | 82.66 | 10.349 | 43.25 | 61.5 |
- | 32500 | 0.5253 | 58.5 | 172.0 | 3.0015 | 30.3473 | 82.38 | 10.314 | 41.75 | 60.75 |
- | 35000 | 0.5657 | 57.25 | 171.0 | 2.9425 | 30.2295 | 82.701 | 10.354 | 38.0 | 51.5 |
- | 37500 | 0.6061 | 57.25 | 157.0 | 2.9166 | 31.0953 | 80.398 | 10.066 | 38.0 | 55.0 |
- | 40000 | 0.6465 | 54.75 | 160.0 | 2.8987 | 30.9607 | 80.747 | 10.11 | 35.5 | 58.75 |
- | 42500 | 0.6869 | 53.75 | 153.0 | 2.8783 | 30.2174 | 82.734 | 10.358 | 35.75 | 67.0 |
- | 45000 | 0.7273 | 50.25 | 136.0 | 2.7764 | 30.1636 | 82.881 | 10.377 | 30.375 | 41.25 |
- | 47500 | 0.7677 | 50.25 | 126.5 | 2.7509 | 30.252 | 82.639 | 10.346 | 29.375 | 38.0 |
- | 50000 | 0.8081 | 49.0 | 127.5 | 2.7362 | 30.1988 | 82.785 | 10.365 | 28.75 | 40.0 |
- | 52500 | 0.8485 | 48.5 | 121.0 | 2.7263 | 30.2772 | 82.57 | 10.338 | 29.0 | 35.5 |
- | 55000 | 0.8889 | 48.0 | 119.0 | 2.7102 | 30.1534 | 82.909 | 10.38 | 28.0 | 34.0 |
- | 57500 | 0.9293 | 47.5 | 118.5 | 2.7043 | 30.181 | 82.834 | 10.371 | 27.875 | 32.5 |
- | 60000 | 0.9697 | 47.5 | 118.0 | 2.7013 | 30.3322 | 82.421 | 10.319 | 27.75 | 32.25 |
- | 61875 | 1.0 | 47.5 | 118.0 | 2.7006 | 30.5306 | 81.885 | 10.252 | 27.625 | 32.25 |
+ | 0 | 0 | 2473901162496.0 | 170424302305280.0 | 22.7948 | 25.4611 | 98.189 | 12.293 | 4060086272.0 | 71468255805440.0 |
+ | 2500 | 0.0404 | 800.0 | 6240.0 | 2.9668 | 25.4143 | 98.37 | 12.316 | 470.0 | 5024.0 |
+ | 5000 | 0.0808 | 326.0 | 1480.0 | 2.1695 | 25.4751 | 98.135 | 12.287 | 247.0 | 278.0 |
+ | 7500 | 0.1212 | 224.0 | 804.0 | 1.8398 | 25.4585 | 98.199 | 12.295 | 185.0 | 191.0 |
+ | 10000 | 0.1616 | 171.0 | 608.0 | 1.6411 | 25.4667 | 98.167 | 12.291 | 146.0 | 165.0 |
+ | 12500 | 0.2020 | 127.0 | 482.0 | 1.3752 | 25.4857 | 98.094 | 12.281 | 111.0 | 141.0 |
+ | 15000 | 0.2424 | 104.5 | 436.0 | 1.2406 | 25.4584 | 98.199 | 12.295 | 93.5 | 101.0 |
+ | 17500 | 0.2828 | 90.5 | 340.0 | 1.1275 | 25.4822 | 98.108 | 12.283 | 74.0 | 147.0 |
+ | 20000 | 0.3232 | 82.5 | 318.0 | 1.0364 | 25.4803 | 98.115 | 12.284 | 69.5 | 136.0 |
+ | 22500 | 0.3636 | 74.0 | 236.0 | 0.8965 | 25.4918 | 98.071 | 12.278 | 61.0 | 88.0 |
+ | 25000 | 0.4040 | 67.0 | 215.0 | 0.8526 | 25.4599 | 98.194 | 12.294 | 52.0 | 99.5 |
+ | 27500 | 0.4444 | 63.75 | 220.0 | 0.8130 | 25.4697 | 98.156 | 12.289 | 46.75 | 111.5 |
+ | 30000 | 0.4848 | 65.5 | 220.0 | 0.8063 | 25.4728 | 98.144 | 12.288 | 53.0 | 71.5 |
+ | 32500 | 0.5253 | 63.75 | 193.0 | 0.7915 | 25.4447 | 98.252 | 12.301 | 45.75 | 112.5 |
+ | 35000 | 0.5657 | 61.5 | 193.0 | 0.7347 | 25.4975 | 98.049 | 12.276 | 42.75 | 64.5 |
+ | 37500 | 0.6061 | 61.0 | 168.0 | 0.7146 | 25.4651 | 98.174 | 12.291 | 44.5 | 58.5 |
+ | 40000 | 0.6465 | 58.75 | 182.0 | 0.7022 | 25.4903 | 98.076 | 12.279 | 41.0 | 95.0 |
+ | 42500 | 0.6869 | 59.75 | 175.0 | 0.6748 | 25.4884 | 98.084 | 12.28 | 42.5 | 59.5 |
+ | 45000 | 0.7273 | 53.75 | 146.0 | 0.5747 | 25.4692 | 98.158 | 12.289 | 36.0 | 51.25 |
+ | 47500 | 0.7677 | 53.0 | 136.0 | 0.5532 | 25.4941 | 98.062 | 12.277 | 34.25 | 38.25 |
+ | 50000 | 0.8081 | 52.25 | 139.0 | 0.5372 | 25.4685 | 98.16 | 12.29 | 33.25 | 43.25 |
+ | 52500 | 0.8485 | 50.75 | 131.0 | 0.5245 | 25.4289 | 98.313 | 12.309 | 33.5 | 37.0 |
+ | 55000 | 0.8889 | 50.5 | 128.0 | 0.5085 | 25.4853 | 98.096 | 12.282 | 32.25 | 35.25 |
+ | 57500 | 0.9293 | 50.0 | 127.0 | 0.5024 | 25.483 | 98.105 | 12.283 | 31.875 | 33.75 |
+ | 60000 | 0.9697 | 49.75 | 126.0 | 0.4989 | 25.4171 | 98.359 | 12.315 | 31.625 | 33.25 |
+ | 61875 | 1.0 | 49.75 | 126.5 | 0.4982 | 25.4751 | 98.135 | 12.286 | 31.75 | 33.25 |
 
 # Resource Usage Comparison
 
- - VRAM Use: 7.7830 GB
+ - VRAM Use: 7.7851 GB
 
- `# Distillation (Teacher -> Student) Architecture Difference:
+ # Distillation (Teacher -> Student) Architecture Difference:
 
 - **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
 - **Total Parameters**: 124,439,808 -> 124,439,808
- - **Data Type (dtype)**: 124439808 -> torch.bfloat16
+ - **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
 - **Model Size**: 0.24 GB -> 0.24 GB
 
 <details>
@@ -103,7 +103,7 @@ Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface
 # Training Objective
 
 ```
- DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=cos, layer_mapper=layer-2))
+ DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=kl, layer_mapper=layer-2))
 ```
 
 # Hyperparameters
@@ -120,9 +120,9 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: `linear`
 - lr_scheduler_warmup_ratio: `0.5`
 - num_epochs: `1.0`
- - distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=cos, layer_mapper=layer-2))`
+ - distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=kl, layer_mapper=layer-2))`
 - train_embeddings: `True`
- - lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7fb7a85a5a50>`
+ - lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f0373c842b0>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `None`
 - student_model_config: `None`
@@ -154,6 +154,6 @@ The following hyperparameters were used during training:
 
 # Framework Versions
 - Distily 0.2.0
- - Transformers 4.44.0
- - Pytorch 2.3.0
+ - Transformers 4.44.1
+ - Pytorch 2.5.0.dev20240821+cu121
 - Datasets 2.21.0
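The updated `DistillationObjective` above swaps the attention-loss function from `cos` (weight 5) to `kl` (weight 25.0), keeping the KL loss on logits. As a rough PyTorch sketch of what such an objective computes, not Distily's actual implementation, and with the `layer-2` mapper assumed here to pair student layer i with teacher layer i + 2 (the real mapping may differ):

```python
import torch
import torch.nn.functional as F

def kl(student_logits: torch.Tensor, teacher_logits: torch.Tensor) -> torch.Tensor:
    # KL(teacher || student) over the vocabulary, matching loss_fn=kl on logits.
    return F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )

def distillation_loss(student_out, teacher_out, attn_weight: float = 25.0) -> torch.Tensor:
    # logits_loss_component: label=logits, weight=1, loss_fn=kl
    loss = kl(student_out.logits, teacher_out.logits)
    # attn_loss_component: label=attn, weight=25.0, loss_fn=kl.
    # layer_mapper=layer-2 is an assumption here: student layer i is compared
    # against teacher layer i + 2, clamped to the last teacher layer.
    n_teacher = len(teacher_out.attentions)
    for i, s_attn in enumerate(student_out.attentions):
        t_attn = teacher_out.attentions[min(i + 2, n_teacher - 1)]
        # Attention maps are already probability distributions over keys,
        # so only the student side needs a log.
        loss = loss + attn_weight * F.kl_div(
            s_attn.clamp_min(1e-9).log(), t_attn, reduction="batchmean"
        )
    return loss
```

The outputs are expected to come from `GPT2LMHeadModel` calls with `output_attentions=True`, since both components read `.logits` and `.attentions`.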
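The hyperparameter list pins `lr_scheduler_type: linear` with `lr_scheduler_warmup_ratio: 0.5`; the `lr_scheduler` entry is only the repr of the instantiated `LambdaLR` object, which is why its memory address churns between commits. A minimal sketch of an equivalent schedule via `transformers.get_linear_schedule_with_warmup`, with a placeholder model and learning rate (neither is taken from the card):

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(2, 2)          # placeholder for the GPT-2 student
total_steps = 61_875                   # final step reported in the table
warmup_steps = int(0.5 * total_steps)  # warmup_ratio 0.5 -> 30,937 steps

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # lr is illustrative
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
)
# The LR ramps linearly to its peak over the first half of training, then
# decays linearly to zero; call scheduler.step() after each optimizer step.
```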
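As a quick consistency check of the architecture block, the reported model size follows from the parameter count at bfloat16 precision (2 bytes per parameter):

```python
params = 124_439_808     # Total Parameters from the card
size_bytes = params * 2  # torch.bfloat16 stores 2 bytes per parameter
print(f"{size_bytes / 2**30:.2f} GiB")  # ~0.23 GiB, consistent with the card's
                                        # ~0.24 GB and the 248,894,656-byte
                                        # model.safetensors pointer below
```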
config.json CHANGED
@@ -33,7 +33,7 @@
 }
 },
 "torch_dtype": "bfloat16",
- "transformers_version": "4.44.0",
+ "transformers_version": "4.44.1",
 "use_cache": true,
 "vocab_size": 50257
 }
generation_config.json CHANGED
@@ -2,5 +2,5 @@
 "_from_model_config": true,
 "bos_token_id": 50256,
 "eos_token_id": 50256,
- "transformers_version": "4.44.0"
+ "transformers_version": "4.44.1"
 }
logs/attn_loss_fn=kl, attn_weight=25.0, layer_mapper=all, projector=linear/completed.flag ADDED
File without changes
logs/attn_loss_fn=kl, attn_weight=5, layer_mapper=last_k_2, projector=linear/events.out.tfevents.1724420015.e3f806ea38c9 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:089d2a2dc924987162cc1facc2d33020700c3baae45069212a79b6484d6a797f
+ size 29632522
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:132f76e21a60b4fcf53fb2eecf98d05c5d1fb7392c082d0536ab4c7f5bd02e1e
+ oid sha256:ff6338fdb174c372a77a1353481ed44d2a503fab9f3f00bce47ea2268e6f3f0c
 size 248894656
training_args.bin CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:8c6af2f8f3d066af609c7323adb118fc5a6aad2e5366a39f8614efe68a72179d
3
  size 1017899144
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:71ef455847b63b48a12aa4a0dc55bf581102a59c12916d5dd2fc022bc2a79821
3
  size 1017899144
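The changed binaries above are Git LFS pointers, so the diff records only `version`, `oid`, and `size` stubs; the `oid` is the SHA-256 digest of the real file. A small integrity check after downloading the actual weights (local path assumed):

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream the file in 1 MiB chunks to avoid loading ~248 MB at once.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Should match the new pointer's oid:
# ff6338fdb174c372a77a1353481ed44d2a503fab9f3f00bce47ea2268e6f3f0c
print(sha256_of("model.safetensors"))
```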