# easyword-model-peft-distilled-1.3B
This model is a fine-tuned version of [facebook/nllb-200-distilled-1.3B](https://huggingface.co/facebook/nllb-200-distilled-1.3B) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 3.1607
- Bleu: 0.0
- Gen Len: 5.9876
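
The model name indicates a PEFT adapter on top of the NLLB base checkpoint, so inference might look like the sketch below. The repository id, source/target language codes, and the assumption that the weights are a PEFT adapter (rather than fully merged into the base model) are all illustrative, not confirmed by this card.

```python
# Minimal inference sketch. Assumes the weights are a PEFT adapter on top of
# facebook/nllb-200-distilled-1.3B; the repo id and language codes below are
# placeholders, not confirmed by this card.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-1.3B")
model = PeftModel.from_pretrained(base, "easyword-model-peft-distilled-1.3B")  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(
    "facebook/nllb-200-distilled-1.3B", src_lang="eng_Latn"  # source language is an assumption
)

inputs = tokenizer("gradient descent", return_tensors="pt")
outputs = model.generate(
    **inputs,
    # NLLB requires the target language to be forced as the first generated
    # token; "kor_Hang" is an illustrative choice only.
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("kor_Hang"),
    max_new_tokens=32,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```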
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
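
These settings map directly onto `Seq2SeqTrainingArguments` in the Hugging Face Trainer API. The sketch below is an assumption about how the run was configured; the output path, per-epoch evaluation, and `predict_with_generate` are inferred rather than stated in the card, and the Adam settings above are the Trainer's defaults.

```python
# Hedged sketch: mapping the listed hyperparameters onto the Trainer API.
# Output path and evaluation settings are assumptions, not from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="easyword-model-peft-distilled-1.3B",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=16,
    fp16=True,                     # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",   # assumption: the table below has one row per epoch
    predict_with_generate=True,    # required to compute Bleu / Gen Len during eval
)
```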
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu   | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log        | 1.0   | 31   | 4.9650          | 0.0    | 9.3975  |
| No log        | 2.0   | 62   | 4.4353          | 0.1522 | 8.6957  |
| No log        | 3.0   | 93   | 3.8967          | 1.2792 | 6.8137  |
| No log        | 4.0   | 124  | 3.6053          | 2.6004 | 6.0062  |
| No log        | 5.0   | 155  | 3.5239          | 2.9339 | 5.8571  |
| No log        | 6.0   | 186  | 3.4692          | 2.6031 | 5.8261  |
| No log        | 7.0   | 217  | 3.4244          | 2.6536 | 5.795   |
| No log        | 8.0   | 248  | 3.3865          | 2.6445 | 5.8509  |
| No log        | 9.0   | 279  | 3.3555          | 2.5482 | 5.9193  |
| No log        | 10.0  | 310  | 3.3325          | 3.087  | 5.913   |
| No log        | 11.0  | 341  | 3.3141          | 3.3511 | 5.9006  |
| No log        | 12.0  | 372  | 3.2986          | 3.3511 | 5.8944  |
| No log        | 13.0  | 403  | 3.2871          | 3.9871 | 5.8758  |
| No log        | 14.0  | 434  | 3.2787          | 3.3083 | 5.882   |
| No log        | 15.0  | 465  | 3.2738          | 3.3083 | 5.882   |
| No log        | 16.0  | 496  | 3.2720          | 3.3083 | 5.882   |
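
The card does not include the metric code behind the Bleu and Gen Len columns; a typical `compute_metrics` for a Seq2Seq run, assuming sacrebleu via the `evaluate` library, looks roughly like this.

```python
# Hedged sketch of the metric computation behind the Bleu / Gen Len columns;
# the actual code is not included in this card. Assumes sacrebleu via `evaluate`.
import numpy as np
import evaluate

bleu = evaluate.load("sacrebleu")

def build_compute_metrics(tokenizer):
    def compute_metrics(eval_preds):
        preds, labels = eval_preds
        # Labels use -100 for padding; restore pad tokens before decoding.
        labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
        decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
        decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
        result = bleu.compute(
            predictions=decoded_preds,
            references=[[label] for label in decoded_labels],
        )
        # "Gen Len" = mean number of non-pad tokens in the generated sequences.
        gen_len = np.mean(
            [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
        )
        return {"bleu": result["score"], "gen_len": gen_len}
    return compute_metrics
```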
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3