# ditransitives_removed_seed-42_1e-3
This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 3.1445
- Accuracy: 0.4045
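For reference, perplexity can be derived from this loss, assuming it is the mean token-level cross-entropy in nats (the training objective is not documented in this card):

```python
import math

# Assumption: the reported evaluation loss is mean token-level
# cross-entropy in nats, so perplexity = exp(loss).
eval_loss = 3.1445
print(f"perplexity ≈ {math.exp(eval_loss):.2f}")  # ≈ 23.21
```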
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
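The hyperparameters above map onto `transformers.TrainingArguments` roughly as sketched below. This is an illustrative reconstruction, not the original training script; the `output_dir` is a placeholder, and the model/dataset setup is omitted because it is not documented here.

```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters onto
# TrainingArguments (Transformers 4.46.x); not the original script.
training_args = TrainingArguments(
    output_dir="ditransitives_removed_seed-42_1e-3",  # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=8,   # effective train batch size 256
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=32000,
    num_train_epochs=20.0,
    fp16=True,                       # "Native AMP" mixed precision
)
```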
### Training results
| Training Loss | Epoch   | Step  | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 6.036         | 0.9998  | 1525  | 4.3721          | 0.2972   |
| 3.946         | 1.9997  | 3050  | 3.8545          | 0.3376   |
| 3.6934        | 2.9995  | 4575  | 3.5792          | 0.3611   |
| 3.3956        | 4.0     | 6101  | 3.4280          | 0.3751   |
| 3.287         | 4.9998  | 7626  | 3.3301          | 0.3844   |
| 3.1683        | 5.9997  | 9151  | 3.2748          | 0.3895   |
| 3.1069        | 6.9995  | 10676 | 3.2356          | 0.3933   |
| 3.0496        | 8.0     | 12202 | 3.2118          | 0.3959   |
| 3.0073        | 8.9998  | 13727 | 3.1902          | 0.3981   |
| 2.9755        | 9.9997  | 15252 | 3.1811          | 0.3991   |
| 2.9444        | 10.9995 | 16777 | 3.1732          | 0.4007   |
| 2.9239        | 12.0    | 18303 | 3.1687          | 0.4017   |
| 2.9028        | 12.9998 | 19828 | 3.1596          | 0.4023   |
| 2.8881        | 13.9997 | 21353 | 3.1569          | 0.4028   |
| 2.8729        | 14.9995 | 22878 | 3.1514          | 0.4032   |
| 2.862         | 16.0    | 24404 | 3.1557          | 0.4036   |
| 2.8532        | 16.9998 | 25929 | 3.1493          | 0.4037   |
| 2.84          | 17.9997 | 27454 | 3.1471          | 0.4039   |
| 2.8419        | 18.9995 | 28979 | 3.1467          | 0.4041   |
| 2.8258        | 19.9967 | 30500 | 3.1445          | 0.4045   |
### Framework versions
- Transformers 4.46.2
- PyTorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.20.0
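A minimal loading sketch compatible with the framework versions above, assuming the checkpoint is a causal language model and using a placeholder repository id (both are assumptions; the architecture is not documented in this card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO_ID = "path/or/repo-id-of-this-model"  # placeholder, not a real repo id

# Assumption: the checkpoint is a causal LM; swap the Auto class
# if the actual architecture differs.
tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForCausalLM.from_pretrained(REPO_ID)

inputs = tokenizer("The cat sat on the", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```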