chungnam_firestation_tiny_model

This model is a fine-tuned version of openai/whisper-tiny on the Marcusxx/chungnam_firestation dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0616
  • CER: 59.7122
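The CER (character error rate) reported above is the character-level edit distance between the model's transcription and the reference, expressed as a percentage of the reference length. A minimal pure-Python sketch of the metric (the toy Korean strings are illustrative only, not from the dataset):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein distance / reference length, as a percent.

    Assumes a non-empty reference string.
    """
    hyp = list(hypothesis)
    # Rolling single-row dynamic-programming table for edit distance.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(reference, 1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            curr[j] = min(
                prev[j] + 1,              # deletion
                curr[j - 1] + 1,          # insertion
                prev[j - 1] + (r != h),   # substitution (0 if match)
            )
        prev = curr
    return 100.0 * prev[-1] / len(reference)

print(round(cer("소방서", "소방소"), 2))  # one substitution over 3 chars → 33.33
```

Note that CER can exceed 100% when the hypothesis contains many insertions relative to the reference, which is why early checkpoints in the table below report values like 164.8.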

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 250
  • training_steps: 2000
  • mixed_precision_training: Native AMP
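With the transformers Trainer API, the hyperparameters listed above correspond roughly to the following Seq2SeqTrainingArguments. This is a hedged reconstruction, not the author's actual script: output_dir is a placeholder, and fp16=True is an assumption standing in for "Native AMP".

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the training configuration from the
# hyperparameter list above. output_dir and fp16 are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="./chungnam_firestation_tiny_model",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=250,
    max_steps=2000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```

Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer configuration, so it needs no explicit arguments here.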

Training results

Training Loss   Epoch     Step   Validation Loss   CER
1.9425          0.6623     100   1.5318            164.8345
0.9552          1.3245     200   0.9569            148.0576
0.6638          1.9868     300   0.6359            126.2734
0.3385          2.6490     400   0.4440            113.9856
0.1794          3.3113     500   0.2976             96.2590
0.1331          3.9735     600   0.2152            184.6619
0.0666          4.6358     700   0.1515             97.4964
0.0354          5.2980     800   0.1093             77.2662
0.0311          5.9603     900   0.0887             77.2950
0.0150          6.6225    1000   0.0749            102.1007
0.0089          7.2848    1100   0.0686             68.5180
0.0052          7.9470    1200   0.0646             65.3237
0.0036          8.6093    1300   0.0614             59.2518
0.0029          9.2715    1400   0.0607             64.6043
0.0025          9.9338    1500   0.0607             67.5396
0.0030         10.5960    1600   0.0612             59.2806
0.0019         11.2583    1700   0.0614             56.1151
0.0018         11.9205    1800   0.0613             55.6547
0.0017         12.5828    1900   0.0616             60.1151
0.0017         13.2450    2000   0.0616             59.7122

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.2.2+cu121
  • Datasets 3.2.0
  • Tokenizers 0.19.1
