whisper-a-nomimo-16

This model is a fine-tuned version of openai/whisper-small on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0862
  • WER: 25.1543
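
The WER above appears to be reported on a 0–100 scale, as is common for Whisper fine-tunes. A minimal sketch of how such a score is typically computed with the evaluate library (the prediction and reference strings below are hypothetical placeholders, not taken from this model's evaluation set):

```python
# Sketch: computing word error rate (WER) with the `evaluate` library.
# The strings below are hypothetical placeholders.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

# evaluate's "wer" returns a fraction; multiply by 100 to get the
# percent-style figure reported above (e.g. 25.1543).
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```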

Model description

More information needed

Intended uses & limitations

More information needed
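
Until the card is filled in, here is a minimal inference sketch, assuming the checkpoint is used like any fine-tuned Whisper model via the transformers ASR pipeline ("audio.wav" is a hypothetical placeholder path):

```python
# Sketch: transcribing audio with this checkpoint via the standard
# automatic-speech-recognition pipeline. "audio.wav" is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="susmitabhatt/whisper-a-nomimo-16",
)
result = asr("audio.wav")
print(result["text"])
```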

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0004
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 132
  • num_epochs: 16
  • mixed_precision_training: Native AMP
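
The hyperparameters above map onto transformers' Seq2SeqTrainingArguments roughly as follows. This is a sketch, not the original training script (which is not included in this card); the output_dir value is a guess:

```python
# Sketch: the listed hyperparameters expressed as Seq2SeqTrainingArguments.
# output_dir is a hypothetical placeholder.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-a-nomimo-16",
    learning_rate=4e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 16
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=132,
    num_train_epochs=16,
    fp16=True,  # "Native AMP" mixed-precision training
)
```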

Training results

| Training Loss | Epoch   | Step | Validation Loss | WER      |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.9727        | 1.0     | 104  | 0.1997          | 47.0679  |
| 0.231         | 2.0     | 208  | 0.0566          | 178.1636 |
| 0.2066        | 3.0     | 312  | 0.2833          | 91.6667  |
| 0.2809        | 4.0     | 416  | 0.2589          | 91.9753  |
| 0.2872        | 5.0     | 520  | 0.2672          | 88.8889  |
| 0.2384        | 6.0     | 624  | 0.2239          | 110.1080 |
| 0.202         | 7.0     | 728  | 0.1959          | 79.7840  |
| 0.1828        | 8.0     | 832  | 0.1883          | 78.3951  |
| 0.1775        | 9.0     | 936  | 0.1908          | 79.1667  |
| 0.1496        | 10.0    | 1040 | 0.2103          | 87.8858  |
| 0.1162        | 11.0    | 1144 | 0.1416          | 54.3981  |
| 0.0674        | 12.0    | 1248 | 0.0975          | 61.5741  |
| 0.0449        | 13.0    | 1352 | 0.0775          | 36.4969  |
| 0.026         | 14.0    | 1456 | 0.0706          | 23.6883  |
| 0.0197        | 15.0    | 1560 | 0.0873          | 26.6204  |
| 0.0119        | 15.8502 | 1648 | 0.0862          | 25.1543  |
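
Note that WER is not bounded by 100: since WER = (substitutions + deletions + insertions) / reference word count, values above 100 (e.g. 178.1636 at epoch 2 and 110.1080 at epoch 6) indicate the hypotheses contained more errors than there were reference words, which requires a large number of insertions.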

Framework versions

  • Transformers 4.48.0.dev0
  • Pytorch 2.4.0
  • Datasets 3.1.0
  • Tokenizers 0.21.0