ales committed on
Commit 13aec20 · 1 Parent(s): 5675ba6

update model card README.md

Files changed (2):
  1. README.md +13 -17
  2. train.log +2 -0
README.md CHANGED
@@ -1,41 +1,38 @@
 ---
-language:
-- be
 license: apache-2.0
 tags:
-- whisper-event
 - generated_from_trainer
 datasets:
-- mozilla-foundation/common_voice_11_0
+- common_voice_11_0
 metrics:
 - wer
 model-index:
-- name: Whisper Small Belarusian
+- name: whisper-tiny-be-test
   results:
   - task:
       name: Automatic Speech Recognition
       type: automatic-speech-recognition
     dataset:
-      name: mozilla-foundation/common_voice_11_0 be
-      type: mozilla-foundation/common_voice_11_0
+      name: common_voice_11_0
+      type: common_voice_11_0
       config: be
       split: validation
       args: be
     metrics:
     - name: Wer
       type: wer
-      value: 55.12820512820513
+      value: 55.311355311355314
 ---
 
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->
 
-# Whisper Tiny Belarusian
+# whisper-tiny-be-test
 
-Test model repository to debug train & evaluation scrips.
-
-This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the mozilla-foundation/common_voice_11_0 be dataset.
+This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the common_voice_11_0 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5382
-- Wer: 55.1282
+- Loss: 0.5342
+- Wer: 55.3114
 
 ## Model description
 
@@ -54,14 +51,13 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 3.1578947368421056e-06
+- learning_rate: 1e-05
 - train_batch_size: 32
 - eval_batch_size: 32
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 5
-- training_steps: 150
+- training_steps: 200
 - mixed_precision_training: Native AMP
 
 ### Training results
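The linear `lr_scheduler_type` in the hyperparameters above can be sketched in plain Python. This is a minimal illustration, not the Trainer's actual scheduler code; the function name is ours, and the warmup/decay shape assumes transformers' `get_linear_schedule_with_warmup` behavior (linear warmup to `base_lr`, then linear decay to zero):

```python
def linear_lr(step, base_lr=1e-05, warmup_steps=0, total_steps=200):
    """Learning rate at a given optimizer step under a linear schedule:
    ramp up over warmup_steps, then decay linearly to zero at total_steps."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

# With the new config (base_lr=1e-05, total_steps=200, no warmup),
# step 39 gives ~8.05e-06, which is consistent with the
# 'learning_rate': 8.050000000000001e-06 logged in train.log at epoch 0.2.
print(linear_lr(39))
```

The schedule explains why the logged learning rates fall over the run: the old config decayed from 3.16e-06 over 150 steps, the new one from 1e-05 over 200 steps.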
train.log CHANGED
@@ -81,3 +81,5 @@
 {'loss': 0.3533, 'learning_rate': 8.050000000000001e-06, 'epoch': 0.2}
 {'eval_loss': 0.530021071434021, 'eval_wer': 56.59340659340659, 'eval_runtime': 18.1912, 'eval_samples_per_second': 3.518, 'eval_steps_per_second': 0.11, 'epoch': 0.2}
 {'loss': 0.2844, 'learning_rate': 7.5500000000000006e-06, 'epoch': 0.25}
+{'eval_loss': 0.5341857671737671, 'eval_wer': 55.311355311355314, 'eval_runtime': 17.7172, 'eval_samples_per_second': 3.612, 'eval_steps_per_second': 0.113, 'epoch': 0.25}
+{'train_runtime': 406.2172, 'train_samples_per_second': 15.755, 'train_steps_per_second': 0.492, 'train_loss': 0.0719480574131012, 'epoch': 0.25}
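The `eval_wer` values in the log are word error rates reported as percentages. The run itself would compute this via the evaluation script (typically `evaluate.load("wer")` or `jiwer`); as a self-contained sketch of what the metric measures, here is word-level edit distance divided by reference length:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference words,
    via a one-row dynamic-programming Levenshtein distance over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    d = list(range(len(hyp) + 1))          # distance row for the empty reference
    for i in range(1, len(ref) + 1):
        prev, d[0] = d[0], i               # prev holds d[i-1][j-1]
        for j in range(1, len(hyp) + 1):
            cur = d[j]
            d[j] = min(d[j] + 1,           # deletion from the reference
                       d[j - 1] + 1,       # insertion into the hypothesis
                       prev + (ref[i - 1] != hyp[j - 1]))  # substitution or match
            prev = cur
    return d[len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))   # 0.0
print(wer("the cat sat", "the dog sat"))   # one substitution in three words
```

Multiplied by 100, this is the scale of the logged `eval_wer` (e.g. 55.311355311355314 means roughly 55% of reference words needed an edit).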