pretty_name: Wikipedia
language:
- da
license: cc0-1.0
license_name: Creative Commons Zero v1.0 Universal
size_categories:
- 100K<n<1M
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
Dataset Card for Wikipedia
The Danish subsection of Wikipedia.
You can read more about Wikipedia on their about page.
Dataset Description
- Language: dan, dansk, Danish
- Number of samples: 264.50K
- Number of tokens (Llama 3): 122.00M
- Average document length (characters): 1386.64
Dataset Structure
An example from the dataset looks as follows.
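The sample record itself is not reproduced here. As a hedged sketch, a record in a Wikipedia-derived text corpus like this would typically carry the article body plus minimal metadata; the field names below (`id`, `text`, `source`) are illustrative assumptions, not the dataset's confirmed schema:

```python
# Illustrative sketch of a single record. The field names ("id", "text",
# "source") are assumptions for demonstration only, not the actual schema.
example = {
    "id": "wiki_da_0",                                # hypothetical document id
    "text": "Danmark er et land i Skandinavien ...",  # article body (Danish)
    "source": "wiki",                                 # hypothetical source tag
}

# Basic sanity checks one might run on such a record
assert isinstance(example["text"], str) and example["text"]
print(sorted(example.keys()))
```

In practice the dataset would be loaded with the `datasets` library and records inspected directly, rather than constructed by hand as above.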
License Information
Creative Commons Zero v1.0 Universal (CC0 1.0)
Additional Information
Citation Information
This dataset was initially published as part of the Danish Gigaword Corpus. We recommend that you cite and reference it if you use this dataset:
Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
@inproceedings{dagw,
title = {{The Danish Gigaword Corpus}},
author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
year = 2021,
booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
publisher = {NEALT}
}