Update README.md
README.md
CHANGED
@@ -18,7 +18,7 @@ The x-large model pre-trained on 16kHz sampled speech audio. When using the mode
 
 ## Model description
 
-The Finnish Wav2Vec2 X-Large has the same architecture and uses the same training objective as the
+The Finnish Wav2Vec2 X-Large has the same architecture and uses the same training objective as the multilingual model described in [this paper](https://www.isca-archive.org/interspeech_2022/babu22_interspeech.pdf). It is pre-trained on 158k hours of unlabeled Finnish speech, including [KAVI radio and television archive materials](https://kavi.fi/en/radio-ja-televisioarkistointia-vuodesta-2008/), Lahjoita puhetta (Donate Speech), the Finnish Parliament, and Finnish VoxPopuli.
 
 You can read more about the pre-trained model from [this paper](TODO).
 
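The updated description points at the pre-trained checkpoint but does not show how to load it. As a minimal, hedged sketch: the model card states the model expects 16 kHz audio, so feature extraction with 🤗 Transformers could look roughly like the following. The repository id is a placeholder assumption, not the actual model id.

```python
# Sketch only: load the pre-trained Wav2Vec2 model for feature extraction.
# "ORG/wav2vec2-xlarge-fi" is a placeholder repository id (assumption).
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2Model

model_id = "ORG/wav2vec2-xlarge-fi"  # placeholder, substitute the real repo id

feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id)

# One second of silent 16 kHz audio as a stand-in for real Finnish speech.
speech = torch.zeros(16000).numpy()

inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch, frames, hidden_size)
```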