imedennikov
committed on
Update README.md
README.md CHANGED
@@ -172,7 +172,7 @@ SentencePiece [4] tokenizer with 3072 tokens for this model was built using the
### Datasets

-The model was trained on ReazonSpeech v2.0 [5] speech corpus containing more
+The model was trained on ReazonSpeech v2.0 [5] speech corpus containing more than 35k hours of natural Japanese speech.

## Performance
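The hunk context above mentions that the SentencePiece [4] tokenizer with 3072 tokens was built for this model. As a minimal sketch (not the authors' actual recipe), a tokenizer of that vocabulary size could be trained with the `sentencepiece` library; the file names, model type, and character coverage below are assumptions, only the 3072-token vocabulary comes from the README.

```python
# Hedged sketch: building a 3072-token SentencePiece tokenizer from training
# transcripts. Paths and hyperparameters other than vocab_size are placeholders.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="train_transcripts.txt",        # one transcript per line (placeholder path)
    model_prefix="tokenizer_spe_v3072",   # writes tokenizer_spe_v3072.model / .vocab (placeholder name)
    vocab_size=3072,                      # vocabulary size stated in the README
    model_type="bpe",                     # assumed subword algorithm; not specified in the diff
    character_coverage=0.9995,            # common setting for Japanese text (assumption)
)
```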