KennethEnevoldsen committed on
Commit ef0f90a · unverified · 1 Parent(s): 936cd0c

Added overview table to the main readme
.gitignore CHANGED
@@ -5,5 +5,9 @@ __pycache__/*
 # cSpell
 cspell.json
 
+# debugfile
+.vscode/launch.json
+
 # tmp files
 tmp.py
+
.vscode/launch.json DELETED
@@ -1,16 +0,0 @@
-{
-    // Use IntelliSense to learn about possible attributes.
-    // Hover to view descriptions of existing attributes.
-    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
-    "version": "0.2.0",
-    "configurations": [
-        {
-            "name": "Python Debugger: Current File with Arguments",
-            "type": "debugpy",
-            "request": "launch",
-            "program": "${file}",
-            "console": "integratedTerminal",
-            "args": "--force"
-        }
-    ]
-}
README.md CHANGED
@@ -206,6 +206,11 @@ The dataset contains text from different sources which are thoroughly defined in
 
 Each entry in the dataset consists of a single text with associated metadata
 
+
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
+
+
 ```py
 {
 "text": "SAMLEDE VÆRKER\n\nJEPPE AAKJÆR GYLDENDALSKE BOGHANDEL...",
@@ -252,61 +257,67 @@ This data generally contains no annotation besides the metadata attached to each
 
 Below follows a brief overview of the sources in the corpus along with their individual license.
 
-<!-- START-MAIN TABLE -->
-<!-- END-MAIN TABLE -->
 
-| Source              | License              |
-| ------------------- | -------------------- |
-| [adl]               | [CC-0]               |
-| [botxt]             | [CC-0]               |
-| [dannet]            | [dannet license]     |
-| [depbank]           | [CC-BY-SA 4.0]       |
-| [ep]                | [CC-0]               |
-| [ft]                | [CC-0]               |
-| [gutenberg]         | [gutenberg license]  |
-| [hest]              | [CC-0]               |
-| [jvj]               | [CC-BY-SA 4.0]       |
-| [naat]              | [CC-0]               |
-| [nordjyllandnews]   | [CC-0]               |
-| [relig]             | [CC-0]               |
-| [retsinformationdk] | [Other (Danish Law)] |
-| [retspraksis]       | [CC-0]               |
-| [skat]              | [CC-0]               |
-| [spont]             | [CC-0]               |
-| [synne]             | [CC-0]               |
-| [tv2r]              | [Custom, CC-BY 4.0]  |
-| [wiki]              | [CC-0]               |
-| [wikibooks]         | [CC-0]               |
-| [wikisource]        | [CC-0]               |
 
-[adl]: data/adl/adl.md
-[botxt]: data/botxt/botxt.md
-[dannet]: data/dannet/dannet.md
-[depbank]: data/depbank/depbank.md
-[ep]: data/ep/ep.md
-[ft]: data/ft/ft.md
-[gutenberg]: data/gutenberg/gutenberg.md
+
+
+
+
+<!-- START-MAIN TABLE -->
+| Source              | Description                                                                                                                   | N. Tokens | License                |
+| :------------------ | :---------------------------------------------------------------------------------------------------------------------------- | :-------- | :--------------------- |
+| [retsinformationdk] | [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk), the official legal information system of Denmark | 516.54M   | [Danish Copyright Law] |
+| [hest]              | Samples from the Danish debate forum www.heste-nettet.dk                                                                      | 389.33M   | [CC-0]                 |
+| [spont]             | Conversational samples collected as a part of research projects at Aarhus University                                          | 1.56M     | [CC-0]                 |
+| [tv2r]              | Contemporary Danish newswire articles published between 2010 and 2019                                                         | 21.67M    | [CC-BY-SA 4.0]         |
+| [ep]                | The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/)                                        | 100.89M   | [CC-0]                 |
+| [gutenberg]         | The Danish subsection from Project [Gutenberg](https://www.gutenberg.org)                                                     | 6.76M     | [Gutenberg License]    |
+| [depbank]           | The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT)        | 185.45K   | [CC-BY-SA 4.0]         |
+| [jvj]               | The works of the Danish author and poet, [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen)               | 3.55M     | [CC-BY-SA 4.0]         |
+| [wikisource]        | The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page)                                               | 5.34M     | [CC-0]                 |
+| [wiki]              | The Danish subsection of [Wikipedia](https://en.wikipedia.org/wiki/Main_Page)                                                 | 122.00M   | [CC-0]                 |
+| [wikibooks]         | The Danish subsection of [Wikibooks](https://www.wikibooks.org)                                                               | 6.24M     | [CC-0]                 |
+| [nordjyllandnews]   | Articles from the Danish newspaper [TV2 Nord](https://www.tv2nord.dk)                                                         | 37.91M    | [CC-0]                 |
+| [adl]               | Danish literature from 1700-2023 from the Archive for Danish Literature (ADL)                                                 | 58.49M    | [CC-0]                 |
+| [retspraksis]       | Case law or judicial practice in Denmark derived from [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis)                | 57.08M    | [CC-0]                 |
+| [relig]             | Danish religious texts from 1700-2022                                                                                         | 1.24M     | [CC-0]                 |
+| [dannet]            | [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet                                                              | 1.52M     | [DanNet 1.0 License]   |
+| [synne]             | Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk       | 52.51K    | [CC-0]                 |
+| [naat]              | A dataset of Danish speeches from 1930-2022                                                                                   | 286.68K   | [CC-0]                 |
+| [botxt]             | The Bornholmsk Ordbog Dictionary Project                                                                                      | 847.97K   | [CC-0]                 |
+| [ft]                | This dataset consists of records from all meetings of the Danish parliament (Folketinget) in the parliament hall              | 114.09M   | [CC-0]                 |
+| [skat]              | Skat is the Danish tax authority. This dataset contains content from its website skat.dk                                      | 122.12M   | [CC-0]                 |
+| **Total**           |                                                                                                                               | 1.57B     |                        |
+
+[retsinformationdk]: data/retsinformationdk/retsinformationdk.md
 [hest]: data/hest/hest.md
-[jvj]: data/jvj/jvj.md
-[naat]: data/naat/naat.md
-[nordjyllandnews]: data/nordjyllandnews/nordjyllandnews.md
-[relig]: data/relig/relig.md
-[retsinformationdk]: data/retsinformationdk/retsinformationdk.md
-[retspraksis]: data/retspraksis/retspraksis.md
-[skat]: data/skat/skat.md
 [spont]: data/spont/spont.md
-[synne]: data/synne/synne.md
-[tv2r]: data/tv2r/tv2r.md
+[tv2r]: data/tv2r/tv2r.md
+[ep]: data/ep/ep.md
+[gutenberg]: data/gutenberg/gutenberg.md
+[depbank]: data/depbank/depbank.md
+[jvj]: data/jvj/jvj.md
+[wikisource]: data/wikisource/wikisource.md
 [wiki]: data/wiki/wiki.md
 [wikibooks]: data/wikibooks/wikibooks.md
-[wikisource]: data/wikisource/wikisource.md
+[nordjyllandnews]: data/nordjyllandnews/nordjyllandnews.md
+[adl]: data/adl/adl.md
+[retspraksis]: data/retspraksis/retspraksis.md
+[relig]: data/relig/relig.md
+[dannet]: data/dannet/dannet.md
+[synne]: data/synne/synne.md
+[naat]: data/naat/naat.md
+[botxt]: data/botxt/botxt.md
+[ft]: data/ft/ft.md
+[skat]: data/skat/skat.md
+
 
 [CC-0]: https://creativecommons.org/publicdomain/zero/1.0/legalcode.en
 [CC-BY-SA 4.0]: https://creativecommons.org/licenses/by-sa/4.0/deed.en
-[Custom, CC-BY 4.0]: https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/blob/main/data/tv2r/tv2r.md#license-information
-[gutenberg license]: https://www.gutenberg.org/policy/license.html
-[dannet license]: https://cst.ku.dk/projekter/dannet/license.txt
-[Other (Danish Law)]: https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/blob/main/data/retsinformationdk/retsinformationdk.md#license-information
+[Danish Copyright Law]: ./data/retsinformationdk/retsinformationdk.md#license-information
+[Gutenberg License]: ./data/gutenberg/gutenberg.md#license-information
+[DanNet 1.0 License]: ./data/dannet/dannet.md#license-information
+<!-- END-MAIN TABLE -->
 
 
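The `<!-- START-MAIN TABLE -->` / `<!-- END-MAIN TABLE -->` and `<!-- START-SAMPLE -->` / `<!-- END-SAMPLE -->` markers added in this commit suggest the table and sample are injected between the markers by a script (presumably `update_descriptive_statistics.py`). A minimal sketch of such marker-based injection; `replace_between_tags` is a hypothetical helper, not the repo's actual code:

```python
def replace_between_tags(readme: str, tag: str, new_content: str) -> str:
    """Replace the text between <!-- START-tag --> and <!-- END-tag --> markers."""
    start_marker = f"<!-- START-{tag} -->"
    end_marker = f"<!-- END-{tag} -->"
    start = readme.find(start_marker)
    end = readme.find(end_marker)
    if start == -1 or end == -1 or end < start:
        raise ValueError(f"tag ({tag}) not found in readme")
    # Keep both markers in place so the replacement can be re-run idempotently.
    return readme[: start + len(start_marker)] + "\n" + new_content + "\n" + readme[end:]


doc = "intro\n<!-- START-MAIN TABLE -->\nold table\n<!-- END-MAIN TABLE -->\noutro"
print(replace_between_tags(doc, "MAIN TABLE", "| Source | License |"))
```

Because the markers survive each rewrite, regenerating the table only ever touches the region between them.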
data/adl/adl.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Archive for Danish Literature
 language:
 - da
 license: cc0-1.0
-license_name: Creative Commons Zero v1.0 Universal
+license_name: CC-0
 size_categories:
 - 1-10k
 task_categories:
@@ -11,13 +11,16 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
+
 # Dataset Card for Archive for Danish Literature
 
 ## Dataset Description
 
 <!-- START-SHORT DESCRIPTION -->
-Danish literature from 1700-2023 stemming for the Archive for Danish Literature (ADL).
+Danish literature from 1700-2023 from the Archive for Danish Literature (ADL).
 <!-- END-SHORT DESCRIPTION -->
@@ -33,44 +36,9 @@ Danish literature from 1700-2023 stemming for the Archive for Danish Literature
 
 ## Dataset Structure
 An example from the dataset looks as follows.
-```yaml
-{
-  'text': 'SAMLEDE VÆRKER
-
-  JEPPE AAKJÆR GYLDENDALSKE BOGHANDE',
-  'source': 'adl',
-  'id': 'adl_aakjaer06val',
-  'added': '2020-09-14',
-  'created': '1700-01-01, 2022-01-01',
-  'metadata': {
-    'domain': 'Wiki & Books',
-    'license': 'Creative Commons Legal Code
-
-    CC0 1.0 Universal',
-    'source-pretty': ' Archive for Danish Literature'
-  }
-}
-```
-
-## Data Fields
-
-- **id**: source-specific identifier.
-- **text**: textual content of the document.
-- **source**: source of the data.
-- **added**: timestamp ai2 acquired this data.
-- **created**: timestamp when original document was created (best-guess if not available)
-- **metadata**: source-specific metadata.
-
-## License Information
-<details>
-<summary>Creative Commons Zero v1.0 Universal</summary>
-<p>
-Creative Commons Legal Code
-
-CC0 1.0 Universal
-</p>
-</details>
 
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
 
 ## Additional Information
data/botxt/botxt.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Bornholmsk
 language:
 - da
 license: cc0-1.0
-license_name: Creative Commons Zero v1.0 Universal
+license_name: CC-0
 size_categories:
 - 1-10k
 task_categories:
@@ -11,7 +11,10 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
+
 # Dataset Card for Bornholmsk
 
 ## Dataset Description
data/dannet/dannet.md CHANGED
@@ -11,7 +11,10 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
+
 # Dataset Card for DanNet
 
 <!-- START-SHORT DESCRIPTION -->
data/depbank/depbank.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Danish Dependency Treebank
 language:
 - da
 license: cc-by-sa-4.0
-license_name: Creative Commons Attribution Share Alike 4.0
+license_name: CC-BY-SA 4.0
 size_categories:
 - 1-10k
 task_categories:
@@ -11,7 +11,10 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
+
 # Dataset Card for Danish Dependency Treebank
 
 <!-- START-SHORT DESCRIPTION -->
data/ep/ep.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: European Parliament
 language:
 - da
 license: cc0-1.0
-license_name: Creative Commons Zero v1.0 Universal
+license_name: CC-0
 size_categories:
 - 1-10k
 task_categories:
@@ -11,7 +11,10 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
+
 # Dataset Card for European Parliament
 
 <!-- START-SHORT DESCRIPTION -->
data/ft/ft.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Folketinget
 language:
 - da
 license: cc0-1.0
-license_name: Creative Commons Zero v1.0 Universal
+license_name: CC-0
 size_categories:
 - 1-10k
 task_categories:
@@ -11,7 +11,10 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
+
 # Dataset Card for Folketinget
 
 ## Dataset Description
data/gutenberg/gutenberg.md CHANGED
@@ -11,13 +11,16 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
+
 # Dataset Card for Gutenberg
 
 ## Dataset Description
 
 <!-- START-SHORT DESCRIPTION -->
-This dataset contains the Danish subsection from Project [Gutenberg](https://www.gutenberg.org).
+The Danish subsection from Project [Gutenberg](https://www.gutenberg.org).
 <!-- END-SHORT DESCRIPTION -->
 
 
data/hest/hest.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Hestenettet (Danish debate forum)
 language:
 - da
 license: cc0-1.0
-license_name: Creative Commons Zero v1.0 Universal
+license_name: CC-0
 size_categories:
 - 10k-100k
 task_categories:
@@ -11,11 +11,14 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
+
 # Dataset Card for Hestenettet
 
 <!-- START-SHORT DESCRIPTION -->
-Extracts from www.heste-nettet.dk a Danish debate forum.
+Samples from the Danish debate forum www.heste-nettet.dk.
 <!-- END-SHORT DESCRIPTION -->
 
 
@@ -37,44 +40,9 @@ Its inclusion as training data for large language models have multiple times rea
 
 ## Dataset Structure
 An example from the dataset looks as follows.
-```yaml
-{
-  'text': 'Er den ikke kær?
-  Jeg kan ikke forstå at der altid',
-  'source': 'hest',
-  'id': 'hest_forum112802271280227_0',
-  'added': '2020-10-05',
-  'created': '2000-01-01, 2022-01-01',
-  'metadata': {
-    'domain': 'Social Media',
-    'license': 'Creative Commons Legal Code
-
-    CC0 1.0 Universal',
-    'source-pretty': 'Hestenettet (Danish debate forum)'
-  }
-}
-```
-
-## Data Fields
-
-- **id**: source-specific identifier.
-- **text**: textual content of the document.
-- **source**: source of the data.
-- **added**: timestamp ai2 acquired this data.
-- **created**: timestamp when original document was created (best-guess if not available)
-- **metadata**: source-specific metadata.
-
-## License Information
-<details>
-<summary>Creative Commons Zero v1.0 Universal</summary>
-<p>
-Creative Commons Legal Code
-
-CC0 1.0 Universal
-</p>
-</details>
-
 
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
 
 ## Additional Information
 
data/jvj/jvj.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Johannes V. Jensen
 language:
 - da
 license: cc-by-sa-4.0
-license_name: Creative Commons Attribution Share Alike 4.0
+license_name: CC-BY-SA 4.0
 size_categories:
 - 1-10k
 task_categories:
@@ -11,7 +11,10 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
+
 # Dataset Card for Johannes V. Jensen
 
 <!-- START-SHORT DESCRIPTION -->
@@ -37,37 +40,9 @@ The works of the Danish author and poet, [Johannes V. Jensen](https://da.wikiped
 
 ## Dataset Structure
 An example from the dataset looks as follows.
-```yaml
-{
-  'text': 'JØRGINE JØRGINE KØBENHAVN HAGE & CLAUSENS FORLAG (',
-  'source': 'jvj',
-  'id': 'jvj_Jørgine',
-  'added': '2020-06-26',
-  'created': '1873-01-01, 1951-01-01',
-  'metadata': {
-    'domain': 'Wiki & Books',
-    'license': 'Attribution-ShareAlike 4.0 International',
-    'source-pretty': 'Johannes V. Jensen (Danish poet)'
-  }
-}
-```
 
-## Data Fields
-
-- **id**: source-specific identifier.
-- **text**: textual content of the document.
-- **source**: source of the data.
-- **added**: timestamp ai2 acquired this data.
-- **created**: timestamp when original document was created (best-guess if not available)
-- **metadata**: source-specific metadata.
-
-## License Information
-<details>
-<summary>Creative Commons Attribution Share Alike 4.0</summary>
-<p>
-Attribution-ShareAlike 4.0 International
-</p>
-</details>
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
 
 
 ## Additional Information
data/naat/naat.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: NAAT
 language:
 - da
 license: cc0-1.0
-license_name: Creative Commons Zero v1.0 Universal
+license_name: CC-0
 size_categories:
 - 1-10k
 task_categories:
@@ -11,7 +11,10 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
+
 # Dataset Card for NAAT
 
 <!-- START-SHORT DESCRIPTION -->
data/nordjyllandnews/nordjyllandnews.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Nordjylland News
 language:
 - da
 license: cc0-1.0
-license_name: Creative Commons Zero v1.0 Universal
+license_name: CC-0
 size_categories:
 - 10-100k
 task_categories:
@@ -11,12 +11,14 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- alexandrainst/nordjylland-news-summarization
 ---
 
 # Dataset Card for Nordjylland News
 
 <!-- START-SHORT DESCRIPTION -->
-Articles from Danish Newspaper [TV2 Nord](https://www.tv2nord.dk).
+Articles from the Danish Newspaper [TV2 Nord](https://www.tv2nord.dk).
 <!-- END-SHORT DESCRIPTION -->
 
 
data/relig/relig.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Religious texts
 language:
 - da
 license: cc0-1.0
-license_name: Creative Commons Zero v1.0 Universal
+license_name: CC-0
 size_categories:
 - 1-10k
 task_categories:
@@ -11,7 +11,10 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
+
 # Dataset Card for Religious texts
 
 <!-- START-SHORT DESCRIPTION -->
data/retsinformationdk/retsinformationdk.md CHANGED
@@ -11,11 +11,14 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
+
 # Dataset Card for retsinformation.dk (Danish legal information)
 
 <!-- START-SHORT DESCRIPTION -->
-[retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk) is the official legal information system of Denmark.
+[retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk), the official legal information system of Denmark.
 <!-- END-SHORT DESCRIPTION -->
 
 
data/retspraksis/retspraksis.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: retspraksis (Danish legal information)
 language:
 - da
 license: cc0-1.0
-license_name: Creative Commons Zero v1.0 Universal
+license_name: CC-0
 size_categories:
 - 1-10k
 task_categories:
@@ -11,11 +11,13 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
 # Dataset Card for retspraksis
 
 <!-- START-SHORT DESCRIPTION -->
-[Retspraksis](https://da.wikipedia.org/wiki/Retspraksis) refers to case law or judicial practice in Denmark.
+Case law or judicial practice in Denmark derived from [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis).
 <!-- END-SHORT DESCRIPTION -->
 
 
data/skat/skat.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: skat.dk
 language:
 - da
 license: cc0-1.0
-license_name: Creative Commons Zero v1.0 Universal
+license_name: CC-0
 size_categories:
 - 10k-100k
 task_categories:
@@ -11,6 +11,8 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
 # Dataset Card for skat.dk
 
data/spont/spont.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Spontaneous speech
 language:
 - da
 license: cc0-1.0
-license_name: Creative Commons Zero v1.0 Universal
+license_name: CC-0
 size_categories:
 - 1-10k
 task_categories:
@@ -11,11 +11,13 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
 # Dataset Card for Spontaneous speech
 
 <!-- START-SHORT DESCRIPTION -->
-A corpora of conversational data originally collected as a part of research projects at Aarhus University.
+Conversational samples collected as a part of research projects at Aarhus University.
 <!-- END-SHORT DESCRIPTION -->
 
 
data/synne/synne.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Synnejysk Forening
 language:
 - da
 license: cc0-1.0
-license_name: Creative Commons Zero v1.0 Universal
+license_name: CC-0
 size_categories:
 - 1-10k
 task_categories:
@@ -11,6 +11,8 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
 # Dataset Card for synnejysk Forening
 
data/tv2r/tv2r.md CHANGED
@@ -1,9 +1,9 @@
 ---
-pretty_name: TV 2 Radio (Danish news)
+pretty_name: TV 2 Radio
 language:
 - da
 license: cc-by-sa-4.0
-license_name: Creative Commons Attribution Share Alike 4.0
+license_name: CC-BY-SA 4.0
 size_categories:
 - 10k-100k
 task_categories:
@@ -11,13 +11,15 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
-# Dataset Card for TV 2 Radio (Danish news)
+# Dataset Card for TV 2 Radio
 
 ## Dataset Description
 
 <!-- START-SHORT DESCRIPTION -->
-This dataset includes contemporary Danish newswire articles published between 2010 and 2019.
+Contemporary Danish newswire articles published between 2010 and 2019.
 <!-- END-SHORT DESCRIPTION -->
 
 
@@ -41,7 +43,7 @@ An example from the dataset looks as follows.
 
 ## License Information
 <details>
-<summary>Creative Commons Attribution Share Alike 4.0</summary>
+<summary>CC-BY-SA 4.0</summary>
 <p>
 The owner of this content is TV2 Regionerne, Denmark.
 Creative Commons Attribution 4.0 International
data/wiki/wiki.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Wikipedia
 language:
 - da
 license: cc0-1.0
-license_name: Creative Commons Zero v1.0 Universal
+license_name: CC-0
 size_categories:
 - 100k-1M
 task_categories:
@@ -11,6 +11,8 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
 # Dataset Card for Wikipedia
 
@@ -39,16 +41,6 @@ An example from the dataset looks as follows.
 <!-- START-SAMPLE -->
 <!-- END-SAMPLE -->
 
-## License Information
-<details>
-<summary>Creative Commons Zero v1.0 Universal</summary>
-<p>
-Creative Commons Legal Code
-
-CC0 1.0 Universal
-</p>
-</details>
-
 
 ## Additional Information
 
data/wikibooks/wikibooks.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Wikibooks
 language:
 - da
 license: cc0-1.0
-license_name: Creative Commons Zero v1.0 Universal
+license_name: CC-0
 size_categories:
 - 1-10k
 task_categories:
@@ -11,12 +11,14 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
 
 # Dataset Card for Wikibooks
 
 <!-- START-SHORT DESCRIPTION -->
-The Danish Subsection of [**Wikibooks**](https://www.wikibooks.org).
+The Danish Subsection of [Wikibooks](https://www.wikibooks.org).
 <!-- END-SHORT DESCRIPTION -->
 
 
@@ -40,7 +42,7 @@ An example from the dataset looks as follows.
 
 ## License Information
 <details>
-<summary>Creative Commons Zero v1.0 Universal</summary>
+<summary>CC-0</summary>
 <p>
 Creative Commons Legal Code
 
data/wikisource/wikisource.md CHANGED
@@ -3,7 +3,7 @@ pretty_name: Wikisource
 language:
 - da
 license: cc0-1.0
-license_name: Creative Commons Zero v1.0 Universal
+license_name: CC-0
 size_categories:
 - 1-10k
 task_categories:
@@ -11,6 +11,8 @@ task_categories:
 - fill-mask
 task_ids:
 - language-modeling
+source_datasets:
+- danish-foundation-models/danish-gigaword
 ---
 # Dataset Card for Wikisource
 
makefile CHANGED
@@ -14,8 +14,8 @@ lint:
 
 bump-version:
 	@echo "--- 🚀 Bumping patch version ---"
-	uv run scripts/bump_version.py
+	uv run src/bump_version.py
 
 update-descriptive-statistics:
 	@echo "--- 🚀 Recomputing Descriptive statistics ---"
-	uv run scripts/update_descriptive_statistics.py
+	uv run src/update_descriptive_statistics.py
pyproject.toml CHANGED
@@ -15,6 +15,7 @@ dependencies = [
     "pytest>=8.3.4",
     "ruff>=0.8.3",
     "seaborn>=0.13.2",
+    "tabulate>=0.9.0",
     "tomlkit>=0.13.2",
     "transformers>=4.47.1",
 ]
src/tests/readme_parsing.py CHANGED
@@ -15,7 +15,7 @@ def read_frontmatter_and_body(file_path: Path) -> tuple[dict[str, Any], str]:
     raise ValueError(f"No frontmatter found in file: {file_path}")
 
 
-def get_tag_idx(readme: str, tag: str):
+def get_tag_idx(readme: str, tag: str) -> tuple[int, int]:
     tag_start = f"<!-- START-{tag} -->"
     tag_end = f"<!-- END-{tag} -->"
     start_idx = readme.find(tag_start)
@@ -23,3 +23,22 @@ def get_tag_idx(readme: str, tag: str):
     if end_idx != -1 and start_idx != -1 and start_idx < end_idx:
         return start_idx, end_idx
     raise ValueError(f"tag ({tag}) not found in readme")
+
+
+def get_tag_content(readme: str, tag: str) -> str:
+    s, e = get_tag_idx(readme, tag=tag)
+    tag_start = f"<!-- START-{tag} -->"
+    return readme[s + len(tag_start) : e].strip()
+
+
+def replace_tag(markdown: str, package: str, tag: str) -> str:
+    tag_start = f"<!-- START-{tag} -->"
+    tag_end = f"<!-- END-{tag} -->"
+
+    if markdown.count(tag_start) != 1 or markdown.count(tag_end) != 1:
+        raise ValueError("Markers should appear exactly once in the markdown.")
+
+    start_md, _, remainder = markdown.partition(tag_start)
+    _, _, end_md = remainder.partition(tag_end)
+
+    return f"{start_md}\n{tag_start}\n{package.strip()}\n{tag_end}\n{end_md}"
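The new `replace_tag` helper is the core of the update: it swaps whatever currently sits between a pair of HTML comment markers for freshly generated content, so the table can be regenerated idempotently on every run. For example (same logic as the helper in the diff; the marker name matches the repo's `MAIN TABLE` tag, and the one-row table is illustrative):

```python
def replace_tag(markdown: str, package: str, tag: str) -> str:
    # Replace everything between the START/END comment markers with `package`.
    tag_start = f"<!-- START-{tag} -->"
    tag_end = f"<!-- END-{tag} -->"

    if markdown.count(tag_start) != 1 or markdown.count(tag_end) != 1:
        raise ValueError("Markers should appear exactly once in the markdown.")

    start_md, _, remainder = markdown.partition(tag_start)
    _, _, end_md = remainder.partition(tag_end)

    return f"{start_md}\n{tag_start}\n{package.strip()}\n{tag_end}\n{end_md}"


readme = "# Readme\n<!-- START-MAIN TABLE -->\nstale table\n<!-- END-MAIN TABLE -->\n"
updated = replace_tag(readme, package="| Source | License |", tag="MAIN TABLE")
```

Because the markers survive the replacement, running the script twice leaves the README unchanged after the first pass.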
src/update_descriptive_statistics.py CHANGED
@@ -16,10 +16,13 @@ from pathlib import Path
 from textwrap import dedent
 from typing import Self, cast
 
+import pandas as pd
 from datasets import Dataset, load_dataset
-from git_utilities import check_is_ancestor, get_current_revision, get_latest_revision
 from transformers import AutoTokenizer
 
+from git_utilities import check_is_ancestor, get_current_revision, get_latest_revision
+from tests.readme_parsing import get_tag_content, read_frontmatter_and_body, replace_tag
+
 logger = logging.getLogger(__name__)
 
 repo_path = Path(__file__).parent.parent
@@ -97,17 +100,9 @@ class DescriptiveStatsOverview:
         return format
 
     def add_to_markdown(self, markdown: str) -> str:
-        start_identifier = "<!-- START-DESC-STATS -->"
-        end_identifier = "<!-- END-DESC-STATS -->"
-
-        if markdown.count(start_identifier) != 1 or markdown.count(end_identifier) != 1:
-            raise ValueError("Markers should appear exactly once in the markdown.")
-
-        start_md, _, remainder = markdown.partition(start_identifier)
-        _, _, end_md = remainder.partition(end_identifier)
-
-        stats = self.to_markdown()
-        return f"{start_md}{start_identifier}{stats}{end_identifier}{end_md}"
+        return replace_tag(
+            markdown=markdown, package=self.to_markdown(), tag="DESC-STATS"
+        )
 
     def to_disk(self, path: Path):
         data = self.__dict__
@@ -115,6 +110,15 @@ class DescriptiveStatsOverview:
         with path.with_suffix(".json").open("w") as f:
             json.dump(self.__dict__, f)
 
+    @classmethod
+    def from_disk(cls, path: Path):
+        with path.open("r") as f:
+            data = json.load(f)
+        if "revision" in data:
+            data.pop("revision")
+        obj = cls(**data)
+        return obj
+
 
 def update_statitics(
     dataset_path: Path,
@@ -183,9 +187,66 @@ def create_parser():
     )
     return parser
 
+
+def create_main_table(repo_path: Path = repo_path) -> pd.DataFrame:
+    datasets = (repo_path / "data").glob("*")
+
+    table = {
+        "Source": [],
+        "Description": [],
+        # "Domain": [], # TODO Add domain
+        "N. Tokens": [],
+        "License": [],
+    }
+    readme_references = ""
+    license_references = (
+        "[CC-0]: https://creativecommons.org/publicdomain/zero/1.0/legalcode.en\n"
+        + "[CC-BY-SA 4.0]: https://creativecommons.org/licenses/by-sa/4.0/deed.en\n"
+    )
+
+    for dataset in datasets:
+        readme_path = dataset / f"{dataset.name}.md"
+        frontmatter, body = read_frontmatter_and_body(readme_path)
+        desc_stats = DescriptiveStatsOverview.from_disk(
+            dataset / "descriptive_stats.json"
+        )
+
+        short_description = get_tag_content(body, tag="SHORT DESCRIPTION").strip()[:-1]  # to exclude "."
+        license, license_name = frontmatter["license"], frontmatter["license_name"]
+
+        table["Source"] += [f"[{dataset.name}]"]
+        readme_references += (
+            f"[{dataset.name}]: data/{dataset.name}/{dataset.name}.md\n"
+        )
+
+        table["License"] += [f"[{license_name}]"]
+        if license == "other":
+            license_references += f"[{license_name}]: ./data/{dataset.name}/{dataset.name}.md#license-information\n"
+        table["Description"] += [short_description]
+        table["N. Tokens"] += [desc_stats.number_of_tokens]
+
+    # total
+    table["Source"] += ["**Total**"]
+    # table["Domain"] += [""]
+    table["License"] += [""]
+    table["Description"] += [""]
+    table["N. Tokens"] += [sum(table["N. Tokens"])]
+
+    df = pd.DataFrame.from_dict(table)
+    df["N. Tokens"] = df["N. Tokens"].apply(human_readable_large_int)
+    return df, readme_references, license_references
+
+
 def update_main_table(repo_path: Path = repo_path):
-    main_readme = repo_path / "README.md"
-    get_tag_idx()
+    logging.info("Updating MAIN TABLE")
+    main_table, readme_references, license_references = create_main_table(repo_path)
+    readme_path = repo_path / "README.md"
+    with readme_path.open("r") as f:
+        markdown = f.read()
+    package = f"{main_table.to_markdown(index=False)}\n\n{readme_references}\n\n{license_references}\n\n"
+    markdown = replace_tag(markdown, package=package, tag="MAIN TABLE")
+    with readme_path.open("w") as f:
+        f.write(markdown)
 
 
 def main(
@@ -206,6 +267,7 @@ def main(
     update_statitics(dataset_path, dataset_path.name, force=force)
 
     update_statitics(repo_path, "default", "README.md", force=force)
+    update_main_table(repo_path)
 
 
 if __name__ == "__main__":
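`create_main_table` formats the raw token counts with `human_readable_large_int`, whose implementation is not part of this diff. A minimal sketch of what such a formatter might look like (the exact rounding and suffixes in the repo may differ):

```python
def human_readable_large_int(value: int) -> str:
    # Hypothetical sketch: scale a raw token count down to a short
    # K/M/B-suffixed string for the "N. Tokens" column.
    for threshold, suffix in ((1_000_000_000, "B"), (1_000_000, "M"), (1_000, "K")):
        if value >= threshold:
            return f"{value / threshold:.2f}{suffix}"
    return str(value)
```

Keeping the counts as integers until the final `apply` means the `**Total**` row can still be computed with a plain `sum` before formatting.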
uv.lock CHANGED
@@ -268,6 +268,7 @@ dependencies = [
     { name = "pytest" },
     { name = "ruff" },
     { name = "seaborn" },
+    { name = "tabulate" },
     { name = "tomlkit" },
     { name = "transformers" },
 ]
@@ -284,6 +285,7 @@ requires-dist = [
     { name = "pytest", specifier = ">=8.3.4" },
     { name = "ruff", specifier = ">=0.8.3" },
     { name = "seaborn", specifier = ">=0.13.2" },
+    { name = "tabulate", specifier = ">=0.9.0" },
     { name = "tomlkit", specifier = ">=0.13.2" },
     { name = "transformers", specifier = ">=4.47.1" },
 ]
@@ -1477,6 +1479,15 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/1d/eb/cb8b01f5edf8f135eb3d0553d159db113a35b2948d0e51eeb735e7ae09ea/statsmodels-0.14.4-cp313-cp313-win_amd64.whl", hash = "sha256:81030108d27aecc7995cac05aa280cf8c6025f6a6119894eef648997936c2dd0", size = 9817574 },
 ]
 
+[[package]]
+name = "tabulate"
+version = "0.9.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/ec/fe/802052aecb21e3797b8f7902564ab6ea0d60ff8ca23952079064155d1ae1/tabulate-0.9.0.tar.gz", hash = "sha256:0095b12bf5966de529c0feb1fa08671671b3368eec77d7ef7ab114be2c068b3c", size = 81090 }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/40/44/4a5f08c96eb108af5cb50b41f76142f0afa346dfa99d5296fe7202a11854/tabulate-0.9.0-py3-none-any.whl", hash = "sha256:024ca478df22e9340661486f85298cff5f6dcdba14f3813e8830015b9ed1948f", size = 35252 },
+]
+
 [[package]]
 name = "tokenizers"
 version = "0.21.0"