Dataset origin: https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-2607

Description

Corpus of texts in 12 languages. For each language, we provide one training, one development, and one test set acquired from Wikipedia articles. In addition, each language dataset contains a (substantially larger) training set collected from general Web texts. All sets are disjoint, except for the Wikipedia and Web training sets, which may contain similar sentences. The data are segmented into sentences, which are further tokenized into words.

All data in the corpus contain diacritics. To strip diacritics from them, use the Python script diacritization_stripping.py contained in the attached stripping_diacritics.zip. The script has two modes; we generally recommend the method called uninames, which behaves better for some languages.
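For a rough idea of what diacritics stripping does, here is a minimal sketch using generic Unicode NFD decomposition. Note this is an illustration only, not the attached script's uninames method, which may treat some languages differently:

```python
import unicodedata


def strip_diacritics(text: str) -> str:
    """Remove combining diacritical marks via Unicode NFD decomposition.

    A generic approximation; the corpus's own diacritization_stripping.py
    (uninames mode) may handle certain characters differently.
    """
    # Decompose characters into base letters + combining marks
    decomposed = unicodedata.normalize("NFD", text)
    # Drop the combining marks, keep the base letters
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    # Re-compose whatever remains
    return unicodedata.normalize("NFC", stripped)


print(strip_diacritics("Příliš žluťoučký kůň"))  # → Prilis zlutoucky kun
```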

The code for training a recurrent neural network based model for diacritics restoration is located at https://github.com/arahusky/diacritics_restoration.

Citation

@misc{11234/1-2607,
  title = {Corpus for training and evaluating diacritics restoration systems},
  author = {N{\'a}plava, Jakub and Straka, Milan and Haji{\v c}, Jan and Stra{\v n}{\'a}k, Pavel},
  url = {http://hdl.handle.net/11234/1-2607},
  note = {{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{\'U}FAL}), Faculty of Mathematics and Physics, Charles University},
  copyright = {Creative Commons - Attribution-{NonCommercial}-{ShareAlike} 4.0 International ({CC} {BY}-{NC}-{SA} 4.0)},
  year = {2018}
}