nithinraok committed · Commit a43d5d7 · Parent: 09b7359

Adding SSL finetuned model as new version
README.md CHANGED
```diff
@@ -63,7 +63,7 @@ model-index:
     metrics:
     - name: Test WER
       type: wer
-      value:
+      value: 2.71
   - task:
     type: Automatic Speech Recognition
     name: automatic-speech-recognition
@@ -77,7 +77,7 @@ model-index:
     metrics:
     - name: Test WER
       type: wer
-      value: 4.
+      value: 4.58
   - task:
     type: Automatic Speech Recognition
     name: automatic-speech-recognition
@@ -91,7 +91,7 @@ model-index:
     metrics:
     - name: Test WER
       type: wer
-      value: 5.
+      value: 5.48
   - task:
     type: Automatic Speech Recognition
     name: automatic-speech-recognition
@@ -103,7 +103,7 @@ model-index:
     metrics:
     - name: Test WER
       type: wer
-      value: 1.
+      value: 1.09
   - task:
     type: Automatic Speech Recognition
     name: automatic-speech-recognition
@@ -115,7 +115,7 @@ model-index:
     metrics:
     - name: Test WER
       type: wer
-      value: 2.
+      value: 2.00
   - task:
     name: Automatic Speech Recognition
     type: automatic-speech-recognition
@@ -128,7 +128,7 @@ model-index:
     metrics:
     - name: Test WER
       type: wer
-      value:
+      value: 4.48
 
 
 ---
@@ -229,6 +229,7 @@ The following tables summarizes the performance of the available models in this
 |**Version**|**Tokenizer**|**Vocabulary Size**|**LS test-other**|**LS test-clean**|**WSJ Eval92**|**WSJ Dev93**|**NSC Part 1**|**MLS Test**|**MCV Test 7.0**| Train Dataset |
 |---------|-----------------------|-----------------|---------------|---------------|------------|-----------|-----|-------|------|------|
 | 1.20.0 | SentencePiece Unigram | 1024 | 3.04 | 1.59 | 1.27 | 2.13 | 5.84 | 4.88 | 5.11 | NeMo ASRSET 3.0 |
+| 1.20.1 | SentencePiece Unigram | 1024 | 2.71 | 1.50 | 1.09 | 2.00 | 4.48 | 4.32 | 5.48 | NeMo ASRSET 3.0 |
 
 
 ## Limitations
```
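All of the numbers touched by this commit are word error rates (WER, `type: wer` in the model-index metadata). As a rough, illustrative sketch (not part of this repository or of NeMo's evaluation code), WER is the word-level edit distance between a hypothesis transcript and the reference, divided by the number of reference words:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words (classic dynamic program).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return d[len(ref)][len(hyp)] / len(ref)

# One deletion ("the" dropped) out of 6 reference words -> 1/6 ~ 0.1667
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

The table values above are percentages, i.e. this ratio multiplied by 100 and measured over an entire test set, not per utterance.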