Training in progress, step 99000
- README.md +6 -6
- config.json +1 -1
- generation_config.json +1 -1
- logs/attn_norm=None, attn_projector=orthogonal, attn_weight=5, learning_rate=0.0002, per_device_train_batch_size=4, warmup_ratio=0/completed.flag +0 -0
- logs/attn_norm=None, attn_projector=orthogonal, attn_weight=5, learning_rate=0.0002, per_device_train_batch_size=4, warmup_ratio=0/events.out.tfevents.1725152625.e3f806ea38c9 +2 -2
- logs/attn_norm=rmsnorm, attn_projector=orthogonal, attn_weight=5, learning_rate=0.0002, per_device_train_batch_size=4, warmup_ratio=0/events.out.tfevents.1725152766.e3f806ea38c9 +3 -0
- model.safetensors +1 -1
- training_args.bin +1 -1
README.md
CHANGED
@@ -44,7 +44,7 @@ More information needed
 
 # Resource Usage Comparison
 
-- VRAM Use: 7.
+- VRAM Use: 7.4164 GB
 
 # Distillation (Teacher -> Student) Architecture Difference:
 
@@ -85,7 +85,7 @@ Trained on 226,096,614 tokens from the [wikimedia/wikipedia](https://huggingface
 # Training Objective
 
 ```
-DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, projector=
+DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, projector=orthogonal))
 ```
 
 # Hyperparameters
@@ -101,9 +101,9 @@ The following hyperparameters were used during training:
 - optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
 - lr_scheduler_type: `polynomial`
 - num_epochs: `1.0`
-- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, projector=
+- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, projector=orthogonal))`
 - train_embeddings: `True`
-- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at
+- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f57c9a3ea10>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `distilbert/distilgpt2`
 - student_model_config: `None`
@@ -133,6 +133,6 @@ The following hyperparameters were used during training:
 
 # Framework Versions
 - Distily 0.4.1
-- Transformers 4.44.
+- Transformers 4.44.1
 - Pytorch 2.4.0+cu121
-- Datasets 2.
+- Datasets 2.21.0
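The completed `DistillationObjective` repr above is the whole loss specification: KL divergence on the logits (weight 1) plus raw MSE on attention maps (weight 5), with student attentions run through an orthogonal projector and matched to teacher layers by the `layer-2` mapper. Below is a minimal sketch of such a combined loss, not Distily's actual implementation; the function name and the `2*i` layer pairing are illustrative assumptions:

```python
# Hypothetical sketch of the objective above -- NOT Distily's code.
# Assumes both models ran forward with output_attentions=True, and that
# "layer-2" maps student layer i to teacher layer 2*i (an assumption).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_attns, teacher_attns, projector):
    # logits component: KL(teacher || student), weight 1
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    # attn component: raw MSE between projected student and teacher attention, weight 5
    mse = sum(
        F.mse_loss(projector(s_attn), teacher_attns[2 * i])
        for i, s_attn in enumerate(student_attns)
    ) / len(student_attns)
    return 1.0 * kl + 5.0 * mse

# Smoke test with random tensors (shapes illustrative: 6-layer student, 12-layer teacher).
s_logits, t_logits = torch.randn(2, 8, 50257), torch.randn(2, 8, 50257)
s_attns = [torch.rand(2, 12, 8, 8) for _ in range(6)]
t_attns = [torch.rand(2, 12, 8, 8) for _ in range(12)]
print(distillation_loss(s_logits, t_logits, s_attns, t_attns, torch.nn.Identity()))
```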
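The optimizer and scheduler entries map to a standard PyTorch/Transformers setup. Here is a sketch with values copied from this README and the run name (`learning_rate=0.0002`, `warmup_ratio=0`); `num_training_steps` is illustrative, since the true count depends on the dataloader. Note that `get_polynomial_decay_schedule_with_warmup` returns a `LambdaLR`, which is why the scheduler shows up above as a bare `LambdaLR` object repr:

```python
import torch
from transformers import AutoModelForCausalLM, get_polynomial_decay_schedule_with_warmup

# Stand-in student; the run built its student from the distilbert/distilgpt2 config.
model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4,
                             betas=(0.9, 0.999), eps=1e-8)
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,          # warmup_ratio=0 in the run name
    num_training_steps=100_000,  # illustrative; this commit lands at step 99000
)
```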
config.json
CHANGED
@@ -40,7 +40,7 @@
     }
   },
   "torch_dtype": "bfloat16",
-  "transformers_version": "4.44.
+  "transformers_version": "4.44.1",
   "use_cache": true,
   "vocab_size": 50257
 }
generation_config.json
CHANGED
@@ -2,5 +2,5 @@
   "_from_model_config": true,
   "bos_token_id": 50256,
   "eos_token_id": 50256,
-  "transformers_version": "4.44.
+  "transformers_version": "4.44.1"
 }
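Both JSON edits are the same `transformers_version` bump to 4.44.1. A quick sanity check from a local clone of the repo (the `"."` path is an assumption about where the checkout lives):

```python
from transformers import AutoConfig, GenerationConfig

# Load the two JSON files edited above from the current directory.
config = AutoConfig.from_pretrained(".")
gen_config = GenerationConfig.from_pretrained(".")
print(config.transformers_version)                       # 4.44.1 after this commit
print(gen_config.bos_token_id, gen_config.eos_token_id)  # 50256 50256
```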
logs/attn_norm=None, attn_projector=orthogonal, attn_weight=5, learning_rate=0.0002, per_device_train_batch_size=4, warmup_ratio=0/completed.flag
ADDED
File without changes
logs/attn_norm=None, attn_projector=orthogonal, attn_weight=5, learning_rate=0.0002, per_device_train_batch_size=4, warmup_ratio=0/events.out.tfevents.1725152625.e3f806ea38c9
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:c32dc5926638a3ede5a171d8b6e6d98a5bfd89f9c2c644d67b4929da341392d1
+size 529
logs/attn_norm=rmsnorm, attn_projector=orthogonal, attn_weight=5, learning_rate=0.0002, per_device_train_batch_size=4, warmup_ratio=0/events.out.tfevents.1725152766.e3f806ea38c9
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d6cd6cfef7c29c9721d7efa5cd60bf29298647ad20377278a7d3e7eb35ef0f7c
+size 47486050
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:363bb36cfeee39dc5404e2642d29bc23d9c67a635c0db6e9b7417e7cbec3b497
 size 163832792
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:cae73e29762f617bdb9fa139016c8068dd595abe8212755c4225cd860c968669
 size 5560
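Every binary file in this commit (`model.safetensors`, `training_args.bin`, the TensorBoard event files) is stored as a Git LFS pointer: three text lines giving the spec version, the blob's sha256, and its size in bytes. A small verification sketch follows; the helper is hypothetical, not part of git-lfs, and reads the whole blob into memory:

```python
import hashlib
from pathlib import Path

def matches_lfs_pointer(pointer_text: str, blob_path: str) -> bool:
    # Parse the three-line pointer: "version ...", "oid sha256:<hex>", "size <bytes>".
    fields = dict(line.split(" ", 1) for line in pointer_text.strip().splitlines())
    expected_oid = fields["oid"].removeprefix("sha256:")
    expected_size = int(fields["size"])
    data = Path(blob_path).read_bytes()
    return len(data) == expected_size and hashlib.sha256(data).hexdigest() == expected_oid

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:363bb36cfeee39dc5404e2642d29bc23d9c67a635c0db6e9b7417e7cbec3b497
size 163832792
"""
print(matches_lfs_pointer(pointer, "model.safetensors"))  # True for this commit's blob
```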