speecht5_fine_tuned_dhivehi_tts_3

This model is a fine-tuned SpeechT5 text-to-speech checkpoint for Dhivehi (the base checkpoint is not recorded in this card), trained on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7593

Model description

More information needed

Intended uses & limitations

More information needed
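
As a minimal starting point, the model should be loadable through the standard SpeechT5 text-to-speech pipeline in transformers. The snippet below is a hedged sketch: the repository id, the input sentence, and the zero speaker embedding are placeholders (SpeechT5 expects a 512-dimensional x-vector, and an embedding extracted from the fine-tuning speaker will sound far better than zeros).

```python
# Hedged inference sketch; assumes the standard SpeechT5 TTS API in transformers.
# "speecht5_fine_tuned_dhivehi_tts_3" stands in for the full Hub repository id.
import torch
import soundfile as sf
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("speecht5_fine_tuned_dhivehi_tts_3")
model = SpeechT5ForTextToSpeech.from_pretrained("speecht5_fine_tuned_dhivehi_tts_3")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Dhivehi text goes here", return_tensors="pt")

# Placeholder speaker embedding: SpeechT5 expects shape (1, 512).
# Substitute a real x-vector for the speaker used during fine-tuning.
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)  # SpeechT5 outputs 16 kHz audio
```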

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 128
  • eval_batch_size: 16
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine_with_restarts
  • lr_scheduler_warmup_steps: 500
  • training_steps: 20000
  • mixed_precision_training: Native AMP
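
The training script for this model is not published. The following is a minimal sketch of a Seq2SeqTrainingArguments configuration that mirrors the listed hyperparameters; argument names follow the standard transformers API, and the batch size is assumed to be per device with no gradient accumulation.

```python
# Hypothetical reconstruction of the training configuration from the list above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_fine_tuned_dhivehi_tts_3",
    learning_rate=5e-5,
    per_device_train_batch_size=128,  # card reports train_batch_size: 128
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=500,
    max_steps=20000,
    fp16=True,  # Native AMP mixed precision
)
```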

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2327 | 5.2632 | 200 | 1.1988 |
| 1.2184 | 10.5263 | 400 | 1.1711 |
| 1.2007 | 15.7895 | 600 | 1.1521 |
| 1.1662 | 21.0526 | 800 | 1.1317 |
| 1.143 | 26.3158 | 1000 | 1.1228 |
| 1.1281 | 31.5789 | 1200 | 1.0680 |
| 1.1014 | 36.8421 | 1400 | 1.0495 |
| 1.0777 | 42.1053 | 1600 | 1.0578 |
| 1.067 | 47.3684 | 1800 | 1.0724 |
| 1.0389 | 52.6316 | 2000 | 1.0126 |
| 1.03 | 57.8947 | 2200 | 1.0089 |
| 1.0187 | 63.1579 | 2400 | 1.0203 |
| 1.0155 | 68.4211 | 2600 | 0.9795 |
| 0.9996 | 73.6842 | 2800 | 0.9728 |
| 0.9747 | 78.9474 | 3000 | 0.9522 |
| 0.9631 | 84.2105 | 3200 | 0.9460 |
| 0.9469 | 89.4737 | 3400 | 0.9317 |
| 0.935 | 94.7368 | 3600 | 0.9173 |
| 0.928 | 100.0 | 3800 | 0.9068 |
| 0.915 | 105.2632 | 4000 | 0.9426 |
| 0.9155 | 110.5263 | 4200 | 0.9063 |
| 0.9015 | 115.7895 | 4400 | 0.8896 |
| 0.8918 | 121.0526 | 4600 | 0.8787 |
| 0.8794 | 126.3158 | 4800 | 0.8818 |
| 0.8822 | 131.5789 | 5000 | 0.8811 |
| 0.8641 | 136.8421 | 5200 | 0.8719 |
| 0.8589 | 142.1053 | 5400 | 0.8590 |
| 0.8477 | 147.3684 | 5600 | 0.8575 |
| 0.8322 | 152.6316 | 5800 | 0.8720 |
| 0.8233 | 157.8947 | 6000 | 0.8520 |
| 0.8267 | 163.1579 | 6200 | 0.8496 |
| 0.8208 | 168.4211 | 6400 | 0.8556 |
| 0.8038 | 173.6842 | 6600 | 0.8512 |
| 0.8072 | 178.9474 | 6800 | 0.8318 |
| 0.7964 | 184.2105 | 7000 | 0.8299 |
| 0.7862 | 189.4737 | 7200 | 0.8261 |
| 0.7876 | 194.7368 | 7400 | 0.8247 |
| 0.7793 | 200.0 | 7600 | 0.8236 |
| 0.7753 | 205.2632 | 7800 | 0.8190 |
| 0.7675 | 210.5263 | 8000 | 0.8118 |
| 0.763 | 215.7895 | 8200 | 0.8198 |
| 0.757 | 221.0526 | 8400 | 0.8156 |
| 0.7538 | 226.3158 | 8600 | 0.7988 |
| 0.7481 | 231.5789 | 8800 | 0.8002 |
| 0.7423 | 236.8421 | 9000 | 0.8164 |
| 0.7481 | 242.1053 | 9200 | 0.8059 |
| 0.7385 | 247.3684 | 9400 | 0.7962 |
| 0.7304 | 252.6316 | 9600 | 0.8026 |
| 0.7238 | 257.8947 | 9800 | 0.7994 |
| 0.7233 | 263.1579 | 10000 | 0.7929 |
| 0.7205 | 268.4211 | 10200 | 0.7854 |
| 0.7131 | 273.6842 | 10400 | 0.7943 |
| 0.7118 | 278.9474 | 10600 | 0.7919 |
| 0.7037 | 284.2105 | 10800 | 0.7921 |
| 0.7079 | 289.4737 | 11000 | 0.7955 |
| 0.7011 | 294.7368 | 11200 | 0.7978 |
| 0.7051 | 300.0 | 11400 | 0.7845 |
| 0.7019 | 305.2632 | 11600 | 0.7853 |
| 0.6954 | 310.5263 | 11800 | 0.7724 |
| 0.6922 | 315.7895 | 12000 | 0.7752 |
| 0.6862 | 321.0526 | 12200 | 0.7825 |
| 0.6888 | 326.3158 | 12400 | 0.7777 |
| 0.6886 | 331.5789 | 12600 | 0.7740 |
| 0.679 | 336.8421 | 12800 | 0.7879 |
| 0.6756 | 342.1053 | 13000 | 0.7712 |
| 0.6778 | 347.3684 | 13200 | 0.7746 |
| 0.6702 | 352.6316 | 13400 | 0.7624 |
| 0.6743 | 357.8947 | 13600 | 0.7661 |
| 0.6727 | 363.1579 | 13800 | 0.7642 |
| 0.6643 | 368.4211 | 14000 | 0.7719 |
| 0.6651 | 373.6842 | 14200 | 0.7622 |
| 0.6632 | 378.9474 | 14400 | 0.7655 |
| 0.6572 | 384.2105 | 14600 | 0.7652 |
| 0.6575 | 389.4737 | 14800 | 0.7624 |
| 0.6581 | 394.7368 | 15000 | 0.7692 |
| 0.6581 | 400.0 | 15200 | 0.7635 |
| 0.6553 | 405.2632 | 15400 | 0.7558 |
| 0.6567 | 410.5263 | 15600 | 0.7507 |
| 0.6511 | 415.7895 | 15800 | 0.7633 |
| 0.6491 | 421.0526 | 16000 | 0.7558 |
| 0.6526 | 426.3158 | 16200 | 0.7643 |
| 0.6511 | 431.5789 | 16400 | 0.7577 |
| 0.6507 | 436.8421 | 16600 | 0.7571 |
| 0.6475 | 442.1053 | 16800 | 0.7535 |
| 0.6543 | 447.3684 | 17000 | 0.7593 |
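
The epoch/step ratio in the log implies roughly 38 optimizer steps per epoch; combined with the batch size of 128, that suggests a training set on the order of a few thousand examples. This is a back-of-the-envelope inference from the logged schedule, not a documented figure:

```python
# Rough dataset-size estimate derived from the logged schedule above.
steps_per_epoch = 200 / 5.2632           # ≈ 38 optimizer steps per epoch
approx_examples = steps_per_epoch * 128  # train_batch_size = 128
print(round(steps_per_epoch), round(approx_examples))  # 38, ~4864
```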

Framework versions

  • Transformers 4.48.0.dev0
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0