---
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: whisper-medium-sb-lug-eng
  results: []
---
# whisper-medium-sb-lug-eng

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the generator dataset. It achieves the following results on the evaluation set:
- Loss: 0.1064
- Wer Lug: 0.239
- Wer Eng: 0.147
- Wer Mean: 0.193
- Cer Lug: 0.075
- Cer Eng: 0.075
- Cer Mean: 0.075
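
The WER and CER figures above are edit-distance ratios: the word- (or character-) level Levenshtein distance between the model's transcript and the reference, divided by the reference length. A minimal pure-Python sketch of the word error rate computation (illustrative only; the actual evaluation likely used a library such as `jiwer` or `evaluate`):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

CER is the same computation over characters instead of words, which is why a Wer Lug of 0.239 can coexist with a much lower Cer Lug of 0.075: many word errors in an agglutinative language like Luganda differ by only a few characters.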
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 12000
- mixed_precision_training: Native AMP
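
Under the linear scheduler with 500 warmup steps, the learning rate ramps from 0 to 1e-05 over the first 500 steps and then decays linearly to 0 at step 12000. A sketch of the schedule (mirroring the behaviour of transformers' `get_linear_schedule_with_warmup`; illustrative, not the training code itself):

```python
def linear_lr(step: int, base_lr: float = 1e-5,
              warmup_steps: int = 500, total_steps: int = 12000) -> float:
    """Learning rate at a given optimizer step under linear warmup + decay."""
    if step < warmup_steps:
        # Linear warmup: 0 -> base_lr over the first warmup_steps steps.
        return base_lr * step / warmup_steps
    # Linear decay: base_lr (at end of warmup) -> 0 (at total_steps).
    remaining = total_steps - step
    return max(0.0, base_lr * remaining / (total_steps - warmup_steps))
```

This explains the epoch jumps visible in the results table (e.g. 0.4167 to 1.0383 between steps 5000 and 5500): training ran for a fixed 12000 steps rather than a whole number of epochs.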
### Training results

Training Loss | Epoch | Step | Validation Loss | Wer Lug | Wer Eng | Wer Mean | Cer Lug | Cer Eng | Cer Mean |
:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
0.6802 | 0.0417 | 500 | 0.2481 | 0.497 | 0.151 | 0.324 | 0.124 | 0.078 | 0.101 |
0.526 | 0.0833 | 1000 | 0.1814 | 0.362 | 0.374 | 0.368 | 0.103 | 0.236 | 0.17 |
0.4551 | 0.125 | 1500 | 0.1649 | 0.305 | 0.135 | 0.22 | 0.087 | 0.066 | 0.076 |
0.3982 | 0.1667 | 2000 | 0.1522 | 0.29 | 0.136 | 0.213 | 0.084 | 0.072 | 0.078 |
0.3781 | 0.2083 | 2500 | 0.1436 | 0.261 | 0.151 | 0.206 | 0.082 | 0.079 | 0.081 |
0.3749 | 0.25 | 3000 | 0.1374 | 0.267 | 0.256 | 0.262 | 0.082 | 0.151 | 0.117 |
0.3397 | 0.2917 | 3500 | 0.1358 | 0.272 | 0.161 | 0.217 | 0.085 | 0.094 | 0.09 |
0.321 | 0.3333 | 4000 | 0.1286 | 0.259 | 0.135 | 0.197 | 0.08 | 0.065 | 0.073 |
0.3014 | 0.375 | 4500 | 0.1271 | 0.252 | 0.178 | 0.215 | 0.078 | 0.139 | 0.108 |
0.3124 | 0.4167 | 5000 | 0.1241 | 0.252 | 0.132 | 0.192 | 0.077 | 0.063 | 0.07 |
0.2668 | 1.0383 | 5500 | 0.1190 | 0.249 | 0.128 | 0.188 | 0.076 | 0.062 | 0.069 |
0.2368 | 1.08 | 6000 | 0.1235 | 0.248 | 0.36 | 0.304 | 0.078 | 0.24 | 0.159 |
0.2188 | 1.1217 | 6500 | 0.1174 | 0.246 | 0.131 | 0.189 | 0.077 | 0.065 | 0.071 |
0.2226 | 1.1633 | 7000 | 0.1139 | 0.242 | 0.13 | 0.186 | 0.075 | 0.062 | 0.069 |
0.2268 | 1.205 | 7500 | 0.1158 | 0.241 | 0.136 | 0.188 | 0.077 | 0.066 | 0.072 |
0.241 | 1.2467 | 8000 | 0.1144 | 0.246 | 0.206 | 0.226 | 0.077 | 0.169 | 0.123 |
0.2315 | 1.2883 | 8500 | 0.1091 | 0.228 | 0.142 | 0.185 | 0.074 | 0.074 | 0.074 |
0.1932 | 1.33 | 9000 | 0.1106 | 0.235 | 0.128 | 0.181 | 0.075 | 0.061 | 0.068 |
0.2334 | 1.3717 | 9500 | 0.1091 | 0.239 | 0.138 | 0.189 | 0.074 | 0.072 | 0.073 |
0.1968 | 1.4133 | 10000 | 0.1072 | 0.237 | 0.129 | 0.183 | 0.074 | 0.063 | 0.069 |
0.1573 | 2.035 | 10500 | 0.1069 | 0.237 | 0.148 | 0.192 | 0.075 | 0.076 | 0.076 |
0.1527 | 2.0767 | 11000 | 0.1068 | 0.235 | 0.146 | 0.191 | 0.073 | 0.076 | 0.075 |
0.168 | 2.1183 | 11500 | 0.1061 | 0.235 | 0.145 | 0.19 | 0.074 | 0.073 | 0.074 |
0.1492 | 2.16 | 12000 | 0.1064 | 0.239 | 0.147 | 0.193 | 0.075 | 0.075 | 0.075 |
### Framework versions

- Transformers 4.42.3
- PyTorch 2.2.0
- Datasets 2.20.0
- Tokenizers 0.19.1