# wav2vec2-xls-r-300m-scandinavian-E4-100h-30-epochs-20250201_v2.2
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.5180
- Wer: 33.7460
- Cer: 9.2540
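The reported Wer and Cer are word and character error rates (in percent), i.e. the Levenshtein edit distance between hypothesis and reference transcripts, normalized by reference length. A minimal sketch of how these metrics are computed (illustrative only; the evaluation code actually used for this model is not included in this card):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (words or characters)."""
    # prev[j] holds the distance between ref[:i-1] and hyp[:j]
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def error_rate(ref, hyp, unit="word"):
    """WER (unit='word') or CER (unit='char') as a percentage."""
    if unit == "word":
        ref, hyp = ref.split(), hyp.split()
    return 100.0 * edit_distance(ref, hyp) / len(ref)
```

In practice, corpus-level WER sums edit distances and reference lengths over all utterances before dividing, rather than averaging per-utterance rates.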
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5000
- num_epochs: 30
- mixed_precision_training: Native AMP
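The learning-rate schedule (cosine decay with 5,000 linear warmup steps) can be sketched as below. The `total_steps` value is an estimate inferred from the training log (epoch 29.28 at step 32,000 implies roughly 32,800 optimizer steps over 30 epochs); the actual run used the Transformers scheduler implementation, which this only approximates:

```python
import math

def cosine_lr(step, base_lr=5e-05, warmup_steps=5000, total_steps=32800):
    """Cosine LR schedule with linear warmup (sketch; total_steps is
    an estimate for this 30-epoch run, not a logged value)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # linear warmup from 0
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

Note that with warmup over 5,000 steps and ~1,093 steps per epoch, the learning rate is still ramping up for the first four to five epochs.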
### Training results
| Training Loss | Epoch   | Step  | Validation Loss | Wer     | Cer     |
|:-------------:|:-------:|:-----:|:---------------:|:-------:|:-------:|
| 3.3709        | 0.9153  | 1000  | 3.2919          | 100.0   | 99.9998 |
| 2.9006        | 1.8302  | 2000  | 2.8985          | 100.0   | 99.9998 |
| 1.0856        | 2.7451  | 3000  | 0.7243          | 59.2040 | 17.1565 |
| 0.6157        | 3.6600  | 4000  | 0.3552          | 34.2313 | 9.4613  |
| 0.4596        | 4.5748  | 5000  | 0.2829          | 27.9991 | 7.6917  |
| 0.4002        | 5.4897  | 6000  | 0.2588          | 26.1989 | 7.0489  |
| 0.8133        | 6.4046  | 7000  | 0.5522          | 30.9377 | 8.5844  |
| 1.244         | 7.3195  | 8000  | 1.0083          | 71.3211 | 25.5473 |
| 0.9065        | 8.2343  | 9000  | 0.6510          | 35.2864 | 9.6846  |
| 0.7912        | 9.1492  | 10000 | 0.5297          | 33.3216 | 9.1039  |
| 0.7898        | 10.0641 | 11000 | 0.5132          | 33.3569 | 9.1523  |
| 0.7686        | 10.9794 | 12000 | 0.5126          | 33.5706 | 9.1892  |
| 0.7419        | 11.8943 | 13000 | 0.5180          | 33.7460 | 9.2549  |
| 0.7229        | 12.8092 | 14000 | 0.5180          | 33.7438 | 9.2557  |
| 0.7368        | 13.7240 | 15000 | 0.5180          | 33.7481 | 9.2555  |
| 0.7861        | 14.6389 | 16000 | 0.5180          | 33.7385 | 9.2536  |
| 0.7975        | 15.5538 | 17000 | 0.5180          | 33.7406 | 9.2549  |
| 0.8011        | 16.4686 | 18000 | 0.5180          | 33.7481 | 9.2546  |
| 0.7773        | 17.3835 | 19000 | 0.5180          | 33.7460 | 9.2531  |
| 0.7719        | 18.2984 | 20000 | 0.5180          | 33.7449 | 9.2546  |
| 0.7456        | 19.2133 | 21000 | 0.5180          | 33.7449 | 9.2542  |
| 0.7024        | 20.1281 | 22000 | 0.5180          | 33.7449 | 9.2535  |
| 0.7654        | 21.0430 | 23000 | 0.5180          | 33.7449 | 9.2546  |
| 0.8013        | 21.9584 | 24000 | 0.5180          | 33.7460 | 9.2570  |
| 0.823         | 22.8732 | 25000 | 0.5180          | 33.7449 | 9.2551  |
| 0.8084        | 23.7881 | 26000 | 0.5180          | 33.7438 | 9.2536  |
| 0.8533        | 24.7030 | 27000 | 0.5180          | 33.7417 | 9.2533  |
| 0.7409        | 25.6178 | 28000 | 0.5180          | 33.7438 | 9.2535  |
| 0.7502        | 26.5327 | 29000 | 0.5180          | 33.7502 | 9.2548  |
| 0.7424        | 27.4476 | 30000 | 0.5180          | 33.7492 | 9.2559  |
| 0.7426        | 28.3625 | 31000 | 0.5180          | 33.7428 | 9.2538  |
| 0.7798        | 29.2773 | 32000 | 0.5180          | 33.7460 | 9.2540  |
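Like other wav2vec2 checkpoints, this model is trained with a CTC head: per-frame logits are turned into text by taking the argmax label for each frame, collapsing consecutive repeats, and dropping the blank token. A minimal sketch of that greedy decoding step (the token ids and tiny vocabulary here are purely illustrative, not this model's actual vocabulary):

```python
import itertools

def ctc_greedy_decode(frame_ids, id_to_char, blank_id=0):
    """Greedy CTC decoding: collapse repeated per-frame labels,
    then remove the blank symbol."""
    collapsed = [label for label, _ in itertools.groupby(frame_ids)]
    return "".join(id_to_char[i] for i in collapsed if i != blank_id)

# Hypothetical 4-symbol vocabulary; id 0 is the CTC blank.
vocab = {1: "h", 2: "e", 3: "j"}
frames = [1, 1, 0, 2, 2, 2, 0, 3, 3]     # per-frame argmax ids
print(ctc_greedy_decode(frames, vocab))  # -> "hej"
```

Because repeats are collapsed before blanks are removed, a blank between two identical labels (e.g. `[1, 0, 1]`) preserves the doubled character, which is how CTC represents letters that genuinely occur twice in a row.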
### Framework versions
- Transformers 4.48.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0