# speecht5_finetuned_librispeech_polish_epo6_batch2_gas2
This model is a fine-tuned version of dawid511/speecht5_finetuned_librispeech_polish_epo3_batch8_gas4 on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.3612
## Model description
More information needed
## Intended uses & limitations
More information needed
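Pending more details from the author, the sketch below shows one way to run text-to-speech inference with the 🤗 Transformers SpeechT5 classes. It is a minimal example, not the author's usage: the repository id is inferred from the card title, the checkpoint is assumed to ship processor files, and the random speaker embedding is a placeholder for a real x-vector.

```python
import torch
import soundfile as sf
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

# Assumed repo id, inferred from the card title; adjust if the checkpoint lives elsewhere.
checkpoint = "dawid511/speecht5_finetuned_librispeech_polish_epo6_batch2_gas2"

processor = SpeechT5Processor.from_pretrained(checkpoint)
model = SpeechT5ForTextToSpeech.from_pretrained(checkpoint)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Polish input text (the card name suggests a Polish fine-tune).
inputs = processor(text="Dzień dobry, miło cię poznać.", return_tensors="pt")

# SpeechT5 conditions on a 512-dim speaker x-vector; a random vector is only a placeholder.
speaker_embeddings = torch.randn((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```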
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 6
- mixed_precision_training: Native AMP
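The values above map onto a 🤗 Transformers `Seq2SeqTrainingArguments` object roughly as follows. This is a hedged sketch rather than the author's training script: `output_dir`, the step-based evaluation/logging/save cadence, and the use of `fp16` for "Native AMP" are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the listed hyperparameters; output_dir and the 100-step
# evaluation/logging/save cadence are assumptions, not taken from the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_librispeech_polish_epo6_batch2_gas2",
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,   # effective train batch size: 2 * 2 = 4
    seed=42,
    optim="adamw_torch",             # AdamW, betas=(0.9, 0.999), eps=1e-08 (library defaults)
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=6,
    fp16=True,                       # "Native AMP" mixed-precision training
    eval_strategy="steps",           # assumption: matches the 100-step cadence in the results table
    eval_steps=100,
    logging_steps=100,
    save_steps=100,
)
```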
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7252 | 0.2558 | 100 | 0.3732 |
| 0.7795 | 0.5115 | 200 | 0.3882 |
| 0.7822 | 0.7673 | 300 | 0.3806 |
| 0.7795 | 1.0230 | 400 | 0.3819 |
| 0.7723 | 1.2788 | 500 | 0.3776 |
| 0.7567 | 1.5345 | 600 | 0.3782 |
| 0.7719 | 1.7903 | 700 | 0.3794 |
| 0.7775 | 2.0460 | 800 | 0.3737 |
| 0.7635 | 2.3018 | 900 | 0.3744 |
| 0.7613 | 2.5575 | 1000 | 0.3751 |
| 0.7519 | 2.8133 | 1100 | 0.3714 |
| 0.7514 | 3.0691 | 1200 | 0.3697 |
| 0.7461 | 3.3248 | 1300 | 0.3711 |
| 0.7462 | 3.5806 | 1400 | 0.3708 |
| 0.7407 | 3.8363 | 1500 | 0.3678 |
| 0.7309 | 4.0921 | 1600 | 0.3691 |
| 0.7243 | 4.3478 | 1700 | 0.3670 |
| 0.7218 | 4.6036 | 1800 | 0.3662 |
| 0.7312 | 4.8593 | 1900 | 0.3645 |
| 0.7086 | 5.1151 | 2000 | 0.3636 |
| 0.7339 | 5.3708 | 2100 | 0.3637 |
| 0.718 | 5.6266 | 2200 | 0.3623 |
| 0.7171 | 5.8824 | 2300 | 0.3612 |
### Framework versions
- Transformers 4.47.1
- PyTorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0