---
base_model: distilbert/distilgpt2
datasets:
- wikimedia/wikipedia
library_name: Distily
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distily_attn_distilgpt2_sweep
results: []
---
# Summary
Distilled with the [Distily](https://github.com/lapp0/distily) library,
using [gpt2](https://huggingface.co/gpt2) as the teacher model
and the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset for training.
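A minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub; the repository id below is inferred from the model name and may differ from the actual Hub path:

```python
# Sketch: load the distilled student for text generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "lapp0/distily_attn_distilgpt2_sweep"  # assumed repo id; adjust to the actual Hub path
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("The city of Paris is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```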
# Model Architecture
- **Architecture**: `GPT2LMHeadModel`
- **Total Parameters**: 81,912,576
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.16 GB
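The figures above can be checked directly from the loaded checkpoint; a short sketch (same assumed repo id as above), with the size following from 2 bytes per bfloat16 parameter:

```python
# Sketch: verify parameter count, dtype, and approximate model size.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "lapp0/distily_attn_distilgpt2_sweep",  # assumed repo id
    torch_dtype=torch.bfloat16,
)
n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params:,}")                      # 81,912,576
print(f"dtype:      {next(model.parameters()).dtype}")  # torch.bfloat16
print(f"size:       {n_params * 2 / 1e9:.2f} GB")       # bf16 = 2 bytes/param -> ~0.16 GB
```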
# Benchmark Metrics Comparison
No benchmark metrics were recorded for this run.
# Resource Usage Comparison
- VRAM Use: 7.4193 GB
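For reference, peak GPU memory can be read from PyTorch's CUDA statistics; a minimal sketch of one way to measure it (not necessarily how Distily produced the figure above):

```python
# Sketch: report peak GPU memory after a training run.
import torch

torch.cuda.reset_peak_memory_stats()
# ... run the distillation training loop here ...
peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"peak VRAM: {peak_gb:.4f} GB")
```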
# Distillation (Teacher -> Student) Architecture Difference
- **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808 -> 81,912,576
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.24 GB -> 0.16 GB
<details>
<summary>Module Diff Details</summary>
```diff
--- teacher model modules
+++ student model modules
@@ -4,7 +4,7 @@
(wpe): Embedding(1024, 768)
(drop): Dropout(p=0.1, inplace=False)
(h): ModuleList(
- (0-11): 12 x GPT2Block(
+ (0-5): 6 x GPT2Block(
(ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(attn): GPT2FlashAttention2(
(c_attn): Conv1D()
```
</details>
<br/>
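The only structural change is the reduction from 12 to 6 transformer blocks, which follows from initializing the student with the `distilbert/distilgpt2` config. A sketch of how a module diff like the one above could be produced (loading pretrained `distilgpt2` here only for its architecture; the actual student was freshly initialized from the config):

```python
# Sketch: diff the teacher and student module trees.
import difflib
from transformers import AutoModelForCausalLM

teacher = AutoModelForCausalLM.from_pretrained("gpt2")
student = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")

diff = difflib.unified_diff(
    repr(teacher).splitlines(),
    repr(student).splitlines(),
    fromfile="teacher model modules",
    tofile="student model modules",
    lineterm="",
)
print("\n".join(diff))
```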
# Train Dataset
Trained on 226,096,614 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
- Num Samples: `396,000`
- Subset: `20231101.en`
- Split: `train`
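A minimal sketch of loading the same subset with the `datasets` library; the sample size and held-out fraction come from the hyperparameters below, while the exact shuffling and selection logic inside Distily may differ:

```python
# Sketch: load the Wikipedia subset used for distillation.
from datasets import load_dataset

ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")
ds = ds.shuffle(seed=42).select(range(400_000))        # dataset_sample_size
split = ds.train_test_split(test_size=0.01, seed=42)   # dataset_test_size
train_ds, eval_ds = split["train"], split["test"]      # 396,000 train / 4,000 eval samples
```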
# Training Objective
```
DistillationObjective(
    logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl),
    attn_loss_component=LossComponent(
        label=attn, weight=25, loss_fn=raw_mse,
        layer_mapper=layer-2, projector=orthogonal
    )
)
```
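In plain terms, the objective combines a KL divergence between student and teacher logits (weight 1) with a raw MSE between attention tensors (weight 25), where student attention layers are matched to teacher layers by the `layer-2` mapper and passed through an orthogonal projector. A rough sketch of such a composite loss, not Distily's actual implementation; the layer mapping here pairs student layer i with teacher layer 2i, which is one plausible reading of `layer-2`, and the orthogonal projector is omitted since both models share head dimensions:

```python
# Sketch: composite distillation loss (KL on logits + weighted MSE on attentions).
# Illustrative only; not Distily's code.
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, attn_weight=25.0):
    # KL divergence between student and teacher next-token distributions.
    s_logp = F.log_softmax(student_out.logits, dim=-1)
    t_p = F.softmax(teacher_out.logits, dim=-1)
    logits_loss = F.kl_div(s_logp, t_p, reduction="batchmean")

    # Raw MSE between attention tensors; map student layer i -> teacher layer 2i
    # (assumed mapping for the 6-layer student against the 12-layer teacher).
    attn_loss = 0.0
    for i, s_attn in enumerate(student_out.attentions):
        t_attn = teacher_out.attentions[2 * i]
        attn_loss = attn_loss + F.mse_loss(s_attn, t_attn)

    return logits_loss + attn_weight * attn_loss
```

Both forward passes would need `output_attentions=True` for the attention tensors to be available.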
# Hyperparameters
The following hyperparameters were used during training:
<details>
<summary>Expand</summary>
- learning_rate: `0.0002`
- train_batch_size: `4`
- eval_batch_size: `8`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `polynomial`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25, loss_fn=raw_mse, layer_mapper=layer-2, projector=orthogonal))`
- train_embeddings: `True`
- lr_scheduler: `torch.optim.lr_scheduler.LambdaLR`
- student_model_name_or_path: `None`
- student_config_name_or_path: `distilbert/distilgpt2`
- student_model_config: `None`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `False`
- dropout: `None`
- teacher_model_name_or_path: `gpt2`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `400000`
- dataset_test_size: `0.01`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0`
- warmup_steps: `0`
- gradient_checkpointing: `True`
</details>
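A hedged sketch of how the optimizer and polynomial decay schedule above could be reproduced with `transformers` utilities (Distily drives this through its own trainer, so details may differ; `student` is assumed to be the student model):

```python
# Sketch: optimizer and LR schedule matching the hyperparameters above.
import torch
from transformers import get_polynomial_decay_schedule_with_warmup

optimizer = torch.optim.Adam(
    student.parameters(), lr=2e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0
)
num_training_steps = 396_000 // 4  # train samples / train_batch_size, 1 epoch, no grad accumulation
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
)
```

The returned schedule is a `LambdaLR`, which is consistent with the `lr_scheduler` entry dumped in the hyperparameter list.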
<br/>
# Framework Versions
- Distily 0.4.1
- Transformers 4.44.1
- Pytorch 2.4.0+cu121
- Datasets 2.21.0