---
base_model: openai/whisper-tiny
language:
  - it
library_name: transformers
license: apache-2.0
metrics:
  - wer
tags:
  - hf-asr-leaderboard
  - generated_from_trainer
model-index:
  - name: Whisper Tiny Italian Combine 5k - Chee Li
    results: []
---

Whisper Tiny Italian Combine 5k - Chee Li

This model is a fine-tuned version of openai/whisper-tiny on the Google Fleurs dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4933
  • WER: 52.2594
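
Below is a minimal inference sketch using the transformers automatic-speech-recognition pipeline. The repository id CheeLi03/whisper-tiny-it-combine-5k and the audio path are illustrative assumptions, not values taken from this card.

```python
# Minimal inference sketch (assumed repository id and audio path).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="CheeLi03/whisper-tiny-it-combine-5k",  # hypothetical repository id
)

# Force Italian transcription so the multilingual Whisper decoder
# neither auto-detects the language nor translates to English.
result = asr(
    "sample_italian.wav",  # path to a local audio file (assumed)
    generate_kwargs={"language": "italian", "task": "transcribe"},
)
print(result["text"])
```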

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
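
The summary above states the model was fine-tuned on Google Fleurs (Italian). A minimal sketch of loading that data with the datasets library follows; the it_it configuration name is the standard Fleurs code for Italian, and the preprocessing actually used for this model is not documented here.

```python
# Sketch: loading the Italian configuration of Google Fleurs (assumed setup).
from datasets import Audio, load_dataset

# trust_remote_code may be required for script-based versions of the dataset.
fleurs = load_dataset("google/fleurs", "it_it", trust_remote_code=True)

# Whisper feature extraction expects 16 kHz audio.
fleurs = fleurs.cast_column("audio", Audio(sampling_rate=16_000))

print(fleurs)                                # train / validation / test splits
print(fleurs["train"][0]["transcription"])   # reference transcript for one example
```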

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP
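
As a reproduction aid, the sketch below shows how these hyperparameters map onto transformers.Seq2SeqTrainingArguments. The output directory is an assumption, and the 1000-step evaluation cadence is inferred from the results table below; the remaining values mirror the list above.

```python
# Sketch of the hyperparameters above expressed as Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-it-combine-5k",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                   # native AMP mixed-precision training
    eval_strategy="steps",       # evaluation every 1000 steps (inferred from the table below)
    eval_steps=1000,
    predict_with_generate=True,  # needed to compute WER from generated transcripts
)
```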

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.5398        | 0.0849 | 1000 | 0.6209          | 60.9740 |
| 0.4894        | 0.1699 | 2000 | 0.5541          | 56.0544 |
| 0.4558        | 0.2548 | 3000 | 0.5213          | 54.6387 |
| 0.4267        | 0.3398 | 4000 | 0.5010          | 52.4281 |
| 0.4225        | 0.4247 | 5000 | 0.4933          | 52.2594 |
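
The WER column above is a percentage. The sketch below shows how such a score is typically computed with the evaluate library; this is an assumption about tooling, and the strings are illustrative rather than taken from the evaluation set.

```python
# Sketch: computing a word error rate with the evaluate library (assumed tooling).
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["ciao come stai"]    # illustrative model transcription
references = ["ciao, come stai?"]   # illustrative reference transcript

# Reported as a percentage, matching the table above.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```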

Framework versions

  • Transformers 4.46.2
  • PyTorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.20.1