---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
  - generated_from_trainer
model-index:
  - name: speecht5_finetuned_english_tehnical
    results: []
---

# speecht5_finetuned_english_tehnical

This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:

- Loss: 0.4457
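The card does not yet include a usage snippet, so here is a minimal inference sketch. It assumes the model is published under the repo id `Yassmen/speecht5_finetuned_english_tehnical` (inferred from the uploader's namespace, not confirmed by the card) and uses a random speaker embedding purely as a placeholder; for natural-sounding speech you would supply a real 512-dimensional x-vector.

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

# Repo id is an assumption based on the uploader's namespace
repo_id = "Yassmen/speecht5_finetuned_english_tehnical"

processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="The API gateway routes requests to the backend.", return_tensors="pt")

# SpeechT5 conditions generation on a 512-dim speaker embedding (x-vector).
# A random vector is only a placeholder; use a real x-vector for a consistent voice.
speaker_embeddings = torch.randn(1, 512)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```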

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP
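For reference, these values map onto a `Seq2SeqTrainingArguments` configuration roughly as sketched below. This is a hedged reconstruction, not the script actually used: the `output_dir` and the evaluation cadence are assumptions (the 100-step cadence is inferred from the results table that follows).

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of a configuration matching the hyperparameters above.
# output_dir and the eval cadence are assumptions, not confirmed by the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_english_tehnical",
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,  # 4 x 8 = effective batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
    eval_strategy="steps",
    eval_steps=100,  # matches the 100-step evaluation points in the results table
)
```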

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5849        | 0.3573 | 100  | 0.5034          |
| 0.5421        | 0.7146 | 200  | 0.4901          |
| 0.5217        | 1.0719 | 300  | 0.4726          |
| 0.5052        | 1.4292 | 400  | 0.4641          |
| 0.4984        | 1.7865 | 500  | 0.4586          |
| 0.495         | 2.1438 | 600  | 0.4550          |
| 0.487         | 2.5011 | 700  | 0.4512          |
| 0.4841        | 2.8584 | 800  | 0.4509          |
| 0.471         | 3.2157 | 900  | 0.4471          |
| 0.4751        | 3.5730 | 1000 | 0.4457          |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1