# whisper-medium-GermanMed-v1
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Hanhpt23/GermanMed dataset. It achieves the following results on the evaluation set:
- Loss: 5.5611
- Wer: 107.4053
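A Wer above 100 can look surprising, but it simply means the total number of substitutions, deletions, and insertions exceeds the number of reference words (insertions are unbounded). A minimal, self-contained sketch of the standard word-level edit-distance computation (illustrative helper, not code from this repository):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: (S + D + I) / N * 100,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[-1][-1] / len(ref) * 100

print(wer("das ist ein test", "das ist ein test"))  # 0.0
print(wer("hallo", "hallo hallo welt"))             # 200.0 -- WER can exceed 100%
```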
## Model description

More information needed

## Intended uses & limitations

More information needed
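Since no usage details are provided, here is a minimal transcription sketch using the Transformers `pipeline` API. The repo id `Hanhpt23/whisper-medium-GermanMed-v1` is taken from the model tree below; the audio path is a placeholder:

```python
# Sketch only: loads the fine-tuned checkpoint for German speech recognition.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Hanhpt23/whisper-medium-GermanMed-v1",
)

# Placeholder file name; any 16 kHz mono audio file works.
result = asr("example_audio.wav")
print(result["text"])
```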
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
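For reference, a `linear` scheduler with 100 warmup steps ramps the learning rate from 0 to 1e-4 over the first 100 optimizer steps, then decays it linearly to 0 by the final step (15300 here, per the results table: 20 epochs × 765 steps). A small standalone sketch of that schedule, not the trainer's internal code:

```python
BASE_LR = 1e-4       # learning_rate
WARMUP_STEPS = 100   # lr_scheduler_warmup_steps
TOTAL_STEPS = 15300  # 20 epochs x 765 steps/epoch (from the results table)

def linear_warmup_lr(step: int) -> float:
    """Linear warmup to BASE_LR, then linear decay to 0."""
    if step < WARMUP_STEPS:
        return BASE_LR * step / WARMUP_STEPS
    return BASE_LR * max(0.0, (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS))

print(linear_warmup_lr(50))     # mid-warmup: 5e-05
print(linear_warmup_lr(100))    # peak: 0.0001
print(linear_warmup_lr(15300))  # end of training: 0.0
```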
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Wer      |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.0873        | 1.0   | 765   | 3.7193          | 114.9828 |
| 2.0404        | 2.0   | 1530  | 3.8002          | 105.8553 |
| 1.1104        | 3.0   | 2295  | 4.2393          | 106.7738 |
| 0.6064        | 4.0   | 3060  | 4.4641          | 113.7773 |
| 0.3464        | 5.0   | 3825  | 4.8067          | 106.8886 |
| 0.2769        | 6.0   | 4590  | 5.0650          | 106.6590 |
| 0.2327        | 7.0   | 5355  | 5.0613          | 111.1366 |
| 0.181         | 8.0   | 6120  | 5.2019          | 104.7072 |
| 0.17          | 9.0   | 6885  | 5.3335          | 123.8806 |
| 0.1518        | 10.0  | 7650  | 5.3649          | 156.4294 |
| 0.1405        | 11.0  | 8415  | 5.4181          | 107.6349 |
| 0.1229        | 12.0  | 9180  | 5.4629          | 102.0666 |
| 0.1335        | 13.0  | 9945  | 5.4842          | 106.0850 |
| 0.1016        | 14.0  | 10710 | 5.4736          | 105.4535 |
| 0.119         | 15.0  | 11475 | 5.4178          | 109.1848 |
| 0.0979        | 16.0  | 12240 | 5.4872          | 106.7738 |
| 0.0919        | 17.0  | 13005 | 5.5066          | 105.7979 |
| 0.0835        | 18.0  | 13770 | 5.5156          | 117.8530 |
| 0.0979        | 19.0  | 14535 | 5.5140          | 109.9885 |
| 0.1001        | 20.0  | 15300 | 5.5611          | 107.4053 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
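To approximate this environment, the listed versions can be pinned (PyPI package names assumed; `torch` is the PyPI name for Pytorch):

```shell
pip install transformers==4.41.1 torch==2.3.0 datasets==2.19.1 tokenizers==0.19.1
```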
## Model tree for Hanhpt23/whisper-medium-GermanMed-v1

Base model: [openai/whisper-medium](https://huggingface.co/openai/whisper-medium)