---
library_name: peft
language:
- en
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- wft
- whisper
- automatic-speech-recognition
- audio
- speech
- generated_from_trainer
datasets:
- JacobLinCool/ami-disfluent
metrics:
- wer
model-index:
- name: whisper-large-v3-verbatim-1
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: JacobLinCool/ami-disfluent
      type: JacobLinCool/ami-disfluent
    metrics:
    - type: wer
      value: 32.322538548713894
      name: Wer
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# whisper-large-v3-verbatim-1

This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the JacobLinCool/ami-disfluent dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1300
- Wer: 32.3225
- Cer: 45.5147
- Decode Runtime: 141.5643
- Wer Runtime: 0.1227
- Cer Runtime: 0.2049
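
WER and CER are word and character error rates in percent. As a minimal sketch (tooling is an assumption; the `evaluate` library is the common choice and the strings below are illustrative, not from the evaluation set), such metrics are computed like this:

```python
import evaluate

# Illustrative only: made-up strings, not taken from the evaluation split.
wer = evaluate.load("wer")
cer = evaluate.load("cer")

predictions = ["uh so i i think we should start"]
references = ["uh so i i think we should start now"]

print("WER (%):", 100 * wer.compute(predictions=predictions, references=references))
print("CER (%):", 100 * cer.compute(predictions=predictions, references=references))
```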

## Model description

This repository contains a PEFT adapter for [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3), fine-tuned for automatic speech recognition on the JacobLinCool/ami-disfluent dataset.
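
A minimal, hedged usage sketch: load the base model, attach this adapter with `peft`, and transcribe 16 kHz audio through the standard `transformers` processor. The adapter id below is an assumption; replace it with this repository's full id.

```python
import torch
import librosa
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

BASE_ID = "openai/whisper-large-v3"
ADAPTER_ID = "whisper-large-v3-verbatim-1"  # assumption: use this repo's full id

processor = WhisperProcessor.from_pretrained(BASE_ID)
model = WhisperForConditionalGeneration.from_pretrained(BASE_ID, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(model, ADAPTER_ID)  # attach the fine-tuned adapter weights
model.to("cuda").eval()

# Load any local audio file resampled to 16 kHz (Whisper's expected sampling rate).
audio, _ = librosa.load("example.wav", sr=16000)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    generated_ids = model.generate(
        input_features=inputs.input_features.to("cuda", torch.float16),
        language="en",
        task="transcribe",
    )

print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```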

## Intended uses & limitations

More information needed

## Training and evaluation data

The model was fine-tuned and evaluated on the [JacobLinCool/ami-disfluent](https://huggingface.co/datasets/JacobLinCool/ami-disfluent) dataset.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
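
As a hedged sketch, the values above roughly map onto a `transformers` `Seq2SeqTrainingArguments` configuration like the following; `output_dir`, precision, and the evaluation cadence (every 100 steps, matching the table below) are assumptions not stated explicitly in the list.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-verbatim-1",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=16,  # 4 x 16 = 64 effective train batch size
    seed=42,
    optim="adamw_torch",             # AdamW (torch) with default betas/epsilon
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=1000,
    eval_strategy="steps",           # assumption: evaluate every 100 steps, per the table below
    eval_steps=100,
    predict_with_generate=True,      # assumption: decoding is needed for WER/CER
    fp16=True,                       # assumption: mixed precision is not stated above
)
```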

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     | Cer      | Decode Runtime | Wer Runtime | Cer Runtime |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:--------:|:--------------:|:-----------:|:-----------:|
| No log        | 0     | 0    | 1.8283          | 63.2783 | 251.8035 | 164.5307       | 0.1838      | 0.3386      |
| 0.2617        | 0.1   | 100  | 0.2189          | 49.6995 | 178.3721 | 161.1098       | 0.1397      | 0.4071      |
| 0.1291        | 0.2   | 200  | 0.1452          | 50.3383 | 95.5275  | 143.0863       | 0.1342      | 0.2932      |
| 0.1418        | 0.3   | 300  | 0.1387          | 29.9186 | 74.6491  | 150.1053       | 0.0780      | 0.1514      |
| 0.1273        | 1.088 | 400  | 0.1372          | 30.8218 | 91.1134  | 166.0178       | 0.1252      | 0.2728      |
| 0.1139        | 1.188 | 500  | 0.1335          | 29.9117 | 101.9003 | 144.2796       | 0.1318      | 0.2934      |
| 0.1663        | 1.288 | 600  | 0.1306          | 31.8418 | 83.0183  | 149.9060       | 0.0826      | 0.1679      |
| 0.1275        | 2.076 | 700  | 0.1311          | 24.9665 | 29.6191  | 143.2151       | 0.0781      | 0.1135      |
| 0.1077        | 2.176 | 800  | 0.1304          | 25.9109 | 36.6217  | 143.4620       | 0.0770      | 0.1227      |
| 0.1711        | 2.276 | 900  | 0.1298          | 35.1729 | 45.0300  | 145.3294       | 0.0786      | 0.1310      |
| 0.0994        | 3.064 | 1000 | 0.1300          | 32.3225 | 45.5147  | 141.5643       | 0.1227      | 0.2049      |


### Framework versions

- PEFT 0.14.0
- Transformers 4.48.0
- Pytorch 2.4.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0