parquet-converter committed on
Commit: 9529c9c
1 Parent(s): 30905e2

Update parquet files

This view is limited to 50 files because the commit contains too many changes. See the raw diff for the full change set.
Files changed (50):
  1. README.md +0 -281
  2. dataset_infos.json +0 -1
  3. wiki_snippets.py +0 -210
  4. wikipedia_en_100_0/train/0000.parquet +3 -0
  5. wikipedia_en_100_0/train/0001.parquet +3 -0
  6. wikipedia_en_100_0/train/0002.parquet +3 -0
  7. wikipedia_en_100_0/train/0003.parquet +3 -0
  8. wikipedia_en_100_0/train/0004.parquet +3 -0
  9. wikipedia_en_100_0/train/0005.parquet +3 -0
  10. wikipedia_en_100_0/train/0006.parquet +3 -0
  11. wikipedia_en_100_0/train/0007.parquet +3 -0
  12. wikipedia_en_100_0/train/0008.parquet +3 -0
  13. wikipedia_en_100_0/train/0009.parquet +3 -0
  14. wikipedia_en_100_0/train/0010.parquet +3 -0
  15. wikipedia_en_100_0/train/0011.parquet +3 -0
  16. wikipedia_en_100_0/train/0012.parquet +3 -0
  17. wikipedia_en_100_0/train/0013.parquet +3 -0
  18. wikipedia_en_100_0/train/0014.parquet +3 -0
  19. wikipedia_en_100_0/train/0015.parquet +3 -0
  20. wikipedia_en_100_0/train/0016.parquet +3 -0
  21. wikipedia_en_100_0/train/0017.parquet +3 -0
  22. wikipedia_en_100_0/train/0018.parquet +3 -0
  23. wikipedia_en_100_0/train/0019.parquet +3 -0
  24. wikipedia_en_100_0/train/0020.parquet +3 -0
  25. wikipedia_en_100_0/train/0021.parquet +3 -0
  26. wikipedia_en_100_0/train/0022.parquet +3 -0
  27. wikipedia_en_100_0/train/0023.parquet +3 -0
  28. wikipedia_en_100_0/train/0024.parquet +3 -0
  29. wikipedia_en_100_0/train/0025.parquet +3 -0
  30. wikipedia_en_100_0/train/0026.parquet +3 -0
  31. wikipedia_en_100_0/train/0027.parquet +3 -0
  32. wikipedia_en_100_0/train/0028.parquet +3 -0
  33. wikipedia_en_100_0/train/0029.parquet +3 -0
  34. wikipedia_en_100_0/train/0030.parquet +3 -0
  35. wikipedia_en_100_0/train/0031.parquet +3 -0
  36. wikipedia_en_100_0/train/0032.parquet +3 -0
  37. wikipedia_en_100_0/train/0033.parquet +3 -0
  38. wikipedia_en_100_0/train/0034.parquet +3 -0
  39. wikipedia_en_100_0/train/0035.parquet +3 -0
  40. wikipedia_en_100_0/train/0036.parquet +3 -0
  41. wikipedia_en_100_0/train/0037.parquet +3 -0
  42. wikipedia_en_100_0/train/0038.parquet +3 -0
  43. wikipedia_en_100_0/train/0039.parquet +3 -0
  44. wikipedia_en_100_0/train/0040.parquet +3 -0
  45. wikipedia_en_100_0/train/0041.parquet +3 -0
  46. wikipedia_en_100_0/train/0042.parquet +3 -0
  47. wikipedia_en_100_0/train/0043.parquet +3 -0
  48. wikipedia_en_100_0/train/0044.parquet +3 -0
  49. wikipedia_en_100_0/train/0045.parquet +3 -0
  50. wikipedia_en_100_0/train/0046.parquet +3 -0
README.md DELETED
@@ -1,281 +0,0 @@
- ---
- annotations_creators:
- - no-annotation
- language_creators:
- - crowdsourced
- language:
- - en
- license:
- - unknown
- multilinguality:
- - multilingual
- pretty_name: WikiSnippets
- size_categories:
- - 10M<n<100M
- source_datasets:
- - extended|wiki40b
- - extended|wikipedia
- task_categories:
- - text-generation
- - other
- task_ids:
- - language-modeling
- paperswithcode_id: null
- tags:
- - text-search
- dataset_info:
- - config_name: wiki40b_en_100_0
-   features:
-   - name: _id
-     dtype: string
-   - name: datasets_id
-     dtype: int32
-   - name: wiki_id
-     dtype: string
-   - name: start_paragraph
-     dtype: int32
-   - name: start_character
-     dtype: int32
-   - name: end_paragraph
-     dtype: int32
-   - name: end_character
-     dtype: int32
-   - name: article_title
-     dtype: string
-   - name: section_title
-     dtype: string
-   - name: passage_text
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 12938641686
-     num_examples: 17553713
-   download_size: 0
-   dataset_size: 12938641686
- - config_name: wikipedia_en_100_0
-   features:
-   - name: _id
-     dtype: string
-   - name: datasets_id
-     dtype: int32
-   - name: wiki_id
-     dtype: string
-   - name: start_paragraph
-     dtype: int32
-   - name: start_character
-     dtype: int32
-   - name: end_paragraph
-     dtype: int32
-   - name: end_character
-     dtype: int32
-   - name: article_title
-     dtype: string
-   - name: section_title
-     dtype: string
-   - name: passage_text
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 26407884393
-     num_examples: 33849898
-   download_size: 0
-   dataset_size: 26407884393
- ---
-
- # Dataset Card for "wiki_snippets"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Dataset Summary
-
- Wikipedia version split into plain text snippets for dense semantic indexing.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- We show detailed information for 2 configurations of the dataset (with 100 snippet passage length and 0 overlap) in
- English:
- - wiki40b_en_100_0: Wiki-40B
- - wikipedia_en_100_0: Wikipedia
-
- ### Data Instances
-
- #### wiki40b_en_100_0
-
- - **Size of downloaded dataset files:** 0.00 MB
- - **Size of the generated dataset:** 12339.25 MB
- - **Total amount of disk used:** 12339.25 MB
-
- An example of 'train' looks as follows:
- ```
- {'_id': '{"datasets_id": 0, "wiki_id": "Q1294448", "sp": 2, "sc": 0, "ep": 6, "ec": 610}',
-  'datasets_id': 0,
-  'wiki_id': 'Q1294448',
-  'start_paragraph': 2,
-  'start_character': 0,
-  'end_paragraph': 6,
-  'end_character': 610,
-  'article_title': 'Ági Szalóki',
-  'section_title': 'Life',
-  'passage_text': "Ági Szalóki Life She started singing as a toddler, considering Márta Sebestyén a role model. Her musical background is traditional folk music; she first won recognition for singing with Ökrös in a traditional folk style, and Besh o droM, a Balkan gypsy brass band. With these ensembles she toured around the world from the Montreal Jazz Festival, through Glastonbury Festival to the Théatre de la Ville in Paris, from New York to Beijing.\nSince 2005, she began to pursue her solo career and explore various genres, such as jazz, thirties ballads, or children's songs.\nUntil now, three of her six released albums"}
- ```
-
- #### wikipedia_en_100_0
-
- - **Size of downloaded dataset files:** 0.00 MB
- - **Size of the generated dataset:** 25184.52 MB
- - **Total amount of disk used:** 25184.52 MB
-
- An example of 'train' looks as follows:
- ```
- {'_id': '{"datasets_id": 0, "wiki_id": "Anarchism", "sp": 0, "sc": 0, "ep": 2, "ec": 129}',
-  'datasets_id': 0,
-  'wiki_id': 'Anarchism',
-  'start_paragraph': 0,
-  'start_character': 0,
-  'end_paragraph': 2,
-  'end_character': 129,
-  'article_title': 'Anarchism',
-  'section_title': 'Start',
-  'passage_text': 'Anarchism is a political philosophy and movement that is sceptical of authority and rejects all involuntary, coercive forms of hierarchy. Anarchism calls for the abolition of the state, which it holds to be unnecessary, undesirable, and harmful. As a historically left-wing movement, placed on the farthest left of the political spectrum, it is usually described alongside communalism and libertarian Marxism as the libertarian wing (libertarian socialism) of the socialist movement, and has a strong historical association with anti-capitalism and socialism. Humans lived in societies without formal hierarchies long before the establishment of formal states, realms, or empires. With the'}
- ```
-
- ### Data Fields
-
- The data fields are the same for all configurations:
- - `_id`: a `string` feature.
- - `datasets_id`: a `int32` feature.
- - `wiki_id`: a `string` feature.
- - `start_paragraph`: a `int32` feature.
- - `start_character`: a `int32` feature.
- - `end_paragraph`: a `int32` feature.
- - `end_character`: a `int32` feature.
- - `article_title`: a `string` feature.
- - `section_title`: a `string` feature.
- - `passage_text`: a `string` feature.
-
-
- ### Data Splits
-
- | name               |    train |
- |:-------------------|---------:|
- | wiki40b_en_100_0   | 17553713 |
- | wikipedia_en_100_0 | 33849898 |
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- See licensing information of source datasets.
-
- ### Citation Information
-
- Cite source datasets:
-
- - Wiki-40B:
- ```
- @inproceedings{49029,
-     title = {Wiki-40B: Multilingual Language Model Dataset},
-     author = {Mandy Guo and Zihang Dai and Denny Vrandecic and Rami Al-Rfou},
-     year = {2020},
-     booktitle = {LREC 2020}
- }
- ```
-
- - Wikipedia:
- ```
- @ONLINE{wikidump,
-     author = "Wikimedia Foundation",
-     title = "Wikimedia Downloads",
-     url = "https://dumps.wikimedia.org"
- }
- ```
-
-
- ### Contributions
-
- Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@yjernite](https://github.com/yjernite) for adding this dataset.
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"wiki40b_en_100_0": {"description": "Wikipedia version split into plain text snippets for dense semantic indexing.\n", "citation": "@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n", "homepage": "https://dumps.wikimedia.org", "license": "", "features": {"_id": {"dtype": "string", "id": null, "_type": "Value"}, "datasets_id": {"dtype": "int32", "id": null, "_type": "Value"}, "wiki_id": {"dtype": "string", "id": null, "_type": "Value"}, "start_paragraph": {"dtype": "int32", "id": null, "_type": "Value"}, "start_character": {"dtype": "int32", "id": null, "_type": "Value"}, "end_paragraph": {"dtype": "int32", "id": null, "_type": "Value"}, "end_character": {"dtype": "int32", "id": null, "_type": "Value"}, "article_title": {"dtype": "string", "id": null, "_type": "Value"}, "section_title": {"dtype": "string", "id": null, "_type": "Value"}, "passage_text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_snippets", "config_name": "wiki40b_en_100_0", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 12938641686, "num_examples": 17553713, "dataset_name": "wiki_snippets"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 12938641686, "size_in_bytes": 12938641686}, "wikipedia_en_100_0": {"description": "Wikipedia version split into plain text snippets for dense semantic indexing.\n", "citation": "@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n", "homepage": "https://dumps.wikimedia.org", "license": "", "features": {"_id": {"dtype": "string", "id": null, "_type": "Value"}, "datasets_id": {"dtype": "int32", "id": null, "_type": "Value"}, "wiki_id": {"dtype": "string", "id": null, "_type": "Value"}, "start_paragraph": {"dtype": "int32", "id": null, "_type": "Value"}, "start_character": {"dtype": "int32", "id": null, "_type": "Value"}, "end_paragraph": {"dtype": "int32", "id": null, "_type": "Value"}, "end_character": {"dtype": "int32", "id": null, "_type": "Value"}, "article_title": {"dtype": "string", "id": null, "_type": "Value"}, "section_title": {"dtype": "string", "id": null, "_type": "Value"}, "passage_text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "wiki_snippets", "config_name": "wikipedia_en_100_0", "version": {"version_str": "2.0.0", "description": null, "major": 2, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 26407884393, "num_examples": 33849898, "dataset_name": "wiki_snippets"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 26407884393, "size_in_bytes": 26407884393}}
wiki_snippets.py DELETED
@@ -1,210 +0,0 @@
- # WARNING: Please, do not use the code in this script as a template to create another script:
- # - It is a bad practice to use `datasets.load_dataset` inside a loading script. Please, avoid doing it.
-
- import json
- import math
-
- import datasets
-
-
- logger = datasets.logging.get_logger(__name__)
-
-
- _CITATION = """\
- @ONLINE {wikidump,
-     author = {Wikimedia Foundation},
-     title = {Wikimedia Downloads},
-     url = {https://dumps.wikimedia.org}
- }
- """
-
- _DESCRIPTION = """\
- Wikipedia version split into plain text snippets for dense semantic indexing.
- """
-
- _LICENSE = (
-     "This work is licensed under the Creative Commons Attribution-ShareAlike "
-     "3.0 Unported License. To view a copy of this license, visit "
-     "http://creativecommons.org/licenses/by-sa/3.0/ or send a letter to "
-     "Creative Commons, PO Box 1866, Mountain View, CA 94042, USA."
- )
-
-
- def wiki40b_article_snippets(article, passage_len=100, overlap=0):
-     paragraphs = article["text"].split("\n")
-     aticle_idx = paragraphs.index("_START_ARTICLE_") + 1
-     article_title = paragraphs[aticle_idx] if aticle_idx < len(paragraphs) else ""
-     section_indices = [i + 1 for i, par in enumerate(paragraphs[:-1]) if par == "_START_SECTION_"]
-     par_tabs = [par.split(" ") for par in paragraphs]
-     word_map = [
-         (i, len(" ".join(par[:j])), w)
-         for i, par in enumerate(par_tabs)
-         if not par[0].startswith("_START_")
-         for j, w in enumerate(par)
-         if i > 0
-     ]
-     step_size = passage_len - overlap
-     passages = []
-     for i in range(math.ceil(len(word_map) / step_size)):
-         pre_toks = word_map[i * step_size : i * step_size + passage_len]
-         start_section_id = max([0] + [j for j in section_indices if j <= pre_toks[0][0]])
-         section_ids = [j for j in section_indices if j >= start_section_id and j <= pre_toks[-1][0]]
-         section_ids = section_ids if len(section_ids) > 0 else [0]
-         passage_text = " ".join([w for p_id, s_id, w in pre_toks])
-         passages += [
-             {
-                 "article_title": article_title,
-                 "section_title": " & ".join([paragraphs[j] for j in section_ids]),
-                 "wiki_id": article["wikidata_id"],
-                 "start_paragraph": pre_toks[0][0],
-                 "start_character": pre_toks[0][1],
-                 "end_paragraph": pre_toks[-1][0],
-                 "end_character": pre_toks[-1][1] + len(pre_toks[-1][2]) + 1,
-                 "passage_text": passage_text.replace("_NEWLINE_", "\n"),
-             }
-         ]
-     return passages
-
-
- def wikipedia_article_snippets(article, passage_len=100, overlap=0):
-     paragraphs = [par for par in article["text"].split("\n") if not par.startswith("Category:")]
-     if "References" in paragraphs:
-         paragraphs = paragraphs[: paragraphs.index("References")]
-     article_title = article["title"]
-     section_indices = [
-         i + 1
-         for i, par in enumerate(paragraphs[:-2])
-         if paragraphs[i] == "" and paragraphs[i + 1] != "" and paragraphs[i + 2] != ""
-     ]
-     par_tabs = [par.split(" ") for par in paragraphs]
-     word_map = [(i, len(" ".join(par[:j])), w) for i, par in enumerate(par_tabs) for j, w in enumerate(par)]
-     step_size = passage_len - overlap
-     passages = []
-     for i in range(math.ceil(len(word_map) / step_size)):
-         pre_toks = word_map[i * step_size : i * step_size + passage_len]
-         start_section_id = max([0] + [j for j in section_indices if j <= pre_toks[0][0]])
-         section_ids = [j for j in section_indices if start_section_id <= j <= pre_toks[-1][0]]
-         section_ids = section_ids if len(section_ids) > 0 else [-1]
-         passage_text = " ".join([w for p_id, s_id, w in pre_toks])
-         passages += [
-             {
-                 "article_title": article_title,
-                 "section_title": " & ".join(["Start" if j == -1 else paragraphs[j].strip() for j in section_ids]),
-                 "wiki_id": article_title.replace(" ", "_"),
-                 "start_paragraph": pre_toks[0][0],
-                 "start_character": pre_toks[0][1],
-                 "end_paragraph": pre_toks[-1][0],
-                 "end_character": pre_toks[-1][1] + len(pre_toks[-1][2]) + 1,
-                 "passage_text": passage_text,
-             }
-         ]
-     return passages
-
-
- _SPLIT_FUNCTION_MAP = {
-     "wikipedia": wikipedia_article_snippets,
-     "wiki40b": wiki40b_article_snippets,
- }
-
-
- def generate_snippets(wikipedia, split_function, passage_len=100, overlap=0):
-     for i, article in enumerate(wikipedia):
-         for doc in split_function(article, passage_len, overlap):
-             part_id = json.dumps(
-                 {
-                     "datasets_id": i,
-                     "wiki_id": doc["wiki_id"],
-                     "sp": doc["start_paragraph"],
-                     "sc": doc["start_character"],
-                     "ep": doc["end_paragraph"],
-                     "ec": doc["end_character"],
-                 }
-             )
-             doc["_id"] = part_id
-             doc["datasets_id"] = i
-             yield doc
-
-
- class WikiSnippetsConfig(datasets.BuilderConfig):
-     """BuilderConfig for WikiSnippets."""
-
-     def __init__(
-         self, wikipedia_name="wiki40b", wikipedia_version_name="en", snippets_length=100, snippets_overlap=0, **kwargs
-     ):
-         """BuilderConfig for WikiSnippets.
-         Args:
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(WikiSnippetsConfig, self).__init__(**kwargs)
-         self.wikipedia_name = wikipedia_name
-         self.wikipedia_version_name = wikipedia_version_name
-         self.snippets_length = snippets_length
-         self.snippets_overlap = snippets_overlap
-
-
- class WikiSnippets(datasets.GeneratorBasedBuilder):
-     BUILDER_CONFIG_CLASS = WikiSnippetsConfig
-     BUILDER_CONFIGS = [
-         WikiSnippetsConfig(
-             name="wiki40b_en_100_0",
-             version=datasets.Version("1.0.0"),
-             wikipedia_name="wiki40b",
-             wikipedia_version_name="en",
-             snippets_length=100,
-             snippets_overlap=0,
-         ),
-         WikiSnippetsConfig(
-             name="wikipedia_en_100_0",
-             version=datasets.Version("2.0.0"),
-             wikipedia_name="wikipedia",
-             wikipedia_version_name="20220301.en",
-             snippets_length=100,
-             snippets_overlap=0,
-         ),
-     ]
-
-     test_dummy_data = False
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "_id": datasets.Value("string"),
-                     "datasets_id": datasets.Value("int32"),
-                     "wiki_id": datasets.Value("string"),
-                     "start_paragraph": datasets.Value("int32"),
-                     "start_character": datasets.Value("int32"),
-                     "end_paragraph": datasets.Value("int32"),
-                     "end_character": datasets.Value("int32"),
-                     "article_title": datasets.Value("string"),
-                     "section_title": datasets.Value("string"),
-                     "passage_text": datasets.Value("string"),
-                 }
-             ),
-             supervised_keys=None,
-             homepage="https://dumps.wikimedia.org",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         # WARNING: It is a bad practice to use `datasets.load_dataset` inside a loading script. Please, avoid doing it.
-         wikipedia = datasets.load_dataset(
-             path=self.config.wikipedia_name,
-             name=self.config.wikipedia_version_name,
-         )
-
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"wikipedia": wikipedia}),
-         ]
-
-     def _generate_examples(self, wikipedia):
-         logger.info(f"generating examples from = {self.config.wikipedia_name} {self.config.wikipedia_version_name}")
-         for split in wikipedia:
-             dset = wikipedia[split]
-             split_function = _SPLIT_FUNCTION_MAP[self.config.wikipedia_name]
-             for doc in generate_snippets(
-                 dset, split_function, passage_len=self.config.snippets_length, overlap=self.config.snippets_overlap
-             ):
-                 id_ = doc["_id"]
-                 yield id_, doc
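The core of the deleted loading script is a fixed-size word window: each article is flattened into a word list and cut into `passage_len`-word passages, stepping by `passage_len - overlap` words. A simplified, self-contained sketch of that windowing (section and character bookkeeping omitted; the function name is illustrative, not from the script):

```python
import math

def split_into_passages(text, passage_len=100, overlap=0):
    """Cut `text` into passages of `passage_len` words, advancing by
    passage_len - overlap words per step (the `_100_0` configs use
    passage_len=100, overlap=0, i.e. disjoint windows)."""
    words = text.split()
    step_size = passage_len - overlap
    passages = []
    for i in range(math.ceil(len(words) / step_size)):
        # The last window may be shorter than passage_len, as in the
        # truncated example passages shown in the dataset card.
        passages.append(" ".join(words[i * step_size : i * step_size + passage_len]))
    return passages
```

With `overlap > 0`, consecutive passages share their boundary words, which the script supports but the published configs do not use.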
wikipedia_en_100_0/train/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0042535492136786245f587a3632f9e197019b0a079456756922685af182c9a7
+ size 264622407
wikipedia_en_100_0/train/0001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:425d51eba211cb2bbb281f1a34d998c30921ac6123675d7efdf0331e15a17e47
+ size 260510417
wikipedia_en_100_0/train/0002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:425b754555b82e3784984192e3ec021b431b43d79f54221105b5078a61fe77a8
+ size 233033093
wikipedia_en_100_0/train/0003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:72f6b32fe5b8584e360022dba263672b0ad04ffb0eadc2ede15de86d171bed23
+ size 262116051
wikipedia_en_100_0/train/0004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a95c3b6e12635ec309a70df68f50fbe51b6642d47552fd3b7b268c6440ee9b39
+ size 262444135
wikipedia_en_100_0/train/0005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:52ce75f4f3bc3dc8c8964d1d49c16638629f5a31b2c4a4bd7b02d80c172e0177
+ size 259679089
wikipedia_en_100_0/train/0006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:72b56445a8db75257d0407b8bddedd2adf65b4bf014c07adc4341a087634ca8d
+ size 254142600
wikipedia_en_100_0/train/0007.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:17d263c2f1046ab79b08da1947be684d6f26ba4bf0456227bc71e31cbc83a801
+ size 259540332
wikipedia_en_100_0/train/0008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:81cebc5d50ae0d3936725fc4b22a2196a356b895688962ff4c5e9fdcad8f54ac
+ size 259122901
wikipedia_en_100_0/train/0009.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e9d279185dfaedde6b1cf69413d9c60c063a6495ef74cc4ca0646c6832736015
+ size 258046025
wikipedia_en_100_0/train/0010.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:be60382b2d11ff91bf3963631cf65dd2ee543c854143a0c59fa16cc543490216
+ size 258962391
wikipedia_en_100_0/train/0011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1053b1734d3112415a312e9077ceee59d921be281bb8ea7c55f4cad8bdf75144
+ size 258473962
wikipedia_en_100_0/train/0012.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8f3288680451cf7b1f2a0b6a12fe2368bba4ee4b382bfdb7a9d39a98e07ede93
+ size 257085912
wikipedia_en_100_0/train/0013.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:09edfdf1cebd7e0ae0b7ec50ce848362f5466196fbc5ed451ccaeefc35663ccf
+ size 255874729
wikipedia_en_100_0/train/0014.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1ccb189b314b61dccf27de88e4e0258c7d1c11219ab5a92220688b3750afc332
+ size 255289493
wikipedia_en_100_0/train/0015.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c272a3becc6e1f0e319113c6d8df248dff127b3410f13f2eaca269f47a8e688d
+ size 255757685
wikipedia_en_100_0/train/0016.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:295b0ac4587632ca492cb66d75a3cf8736c80a7eea545c0c84ee51f047cc6fd9
+ size 255981112
wikipedia_en_100_0/train/0017.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c0498780557e273d9181aebded5e6488123874d047d82a27ca6a2440e70215ef
+ size 252417941
wikipedia_en_100_0/train/0018.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f15a231883d3aa8e23eb3028932c727f23f897258730a574e0d686c68759fad4
+ size 252158485
wikipedia_en_100_0/train/0019.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9879d2a8c333d653eeff14741fd8c9dd9b77e12974364b651a00443b7ff2e0eb
+ size 254376700
wikipedia_en_100_0/train/0020.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:851b23cd6f5c5465ac7413b86064caeb6a082f1ec136c844ba2de58e93c97171
+ size 253096271
wikipedia_en_100_0/train/0021.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9d0b80fc29aaa2101e2b8b45943491d350982aa79d98c99f25af94b2c5fa5571
+ size 252787011
wikipedia_en_100_0/train/0022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc46945ca7fdf1fe2504a561b733e68a4040b9c00231c548ae5a91c8ce1893a6
+ size 244833335
wikipedia_en_100_0/train/0023.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1414fa74289f9298e1722c425d82e9c1a3de67cb2e8320a3730f6d6acd4cda03
+ size 244330626
wikipedia_en_100_0/train/0024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b999d8bbbaff64ab0e3ba6ae1a54460c5037228c3f52edc7b3017fe21d3f8ee
+ size 245086089
wikipedia_en_100_0/train/0025.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fb0fc88f17d5baddd6c631f9580f39b942c88542d93ea1aa5cb83ebd759831f9
+ size 241862358
wikipedia_en_100_0/train/0026.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79c3be30de96077b24815b020cbe733ec4b94df32f456b3155859f90f056ad9c
+ size 243043234
wikipedia_en_100_0/train/0027.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b0acc3c9913ca90acb59f4d069393fa95222f381f25af0bc01b05e4191cba60
+ size 244054865
wikipedia_en_100_0/train/0028.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:528d9764f7aa292545d0b09afe6c0210a358575dc08417eaad6f1cb34f56515c
+ size 242557766
wikipedia_en_100_0/train/0029.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:92a26abf33b0d3159c26e9c04e8ce5ff2d59896f38aadcc210c3cbdd00dcea1e
+ size 242655558
wikipedia_en_100_0/train/0030.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d84ffcd9d3aa4ca68303ba7b17b6666625e833ac648f1a4bc8c9e52f9ef927d8
+ size 246004266
wikipedia_en_100_0/train/0031.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:09f6cac4258c4317f3bc2b638ebc85c5891a6d6d6d76b067bff0da1261b70b9e
+ size 246825597
wikipedia_en_100_0/train/0032.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8b3ce632035b8431d5fc2da68bbf0029b6aafbabdb6c0b83459c3c1b707e2b91
+ size 242458107
wikipedia_en_100_0/train/0033.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8b680f11e578492f7f84380c8f4e0e111f1075b3fb88b8f18a8c5ee7b0a531bf
+ size 243960764
wikipedia_en_100_0/train/0034.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:101b35ac44f56e87b45190c87194233bea8c1e59dadee98ddad461567b544cde
+ size 249587717
wikipedia_en_100_0/train/0035.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:59a6113e29f219ba6e8b264dcc2718cb40a8055a94d90fa78f75a3d96a53d27f
+ size 244520386
wikipedia_en_100_0/train/0036.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4a731c14c9cbed09dcdbb54a03c36da387d750fe8fba8a6da6e836a6e74f2cc5
+ size 244517593
wikipedia_en_100_0/train/0037.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1771af433878d879cf6bd7b2d50d48722d60ee67d3fc8f84cb9da4dc4ebf173f
+ size 239626609
wikipedia_en_100_0/train/0038.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e604a4df47fbd34de3f5124d99abed4a10a5c3d93951a7160c918fbc7c7224ca
+ size 245407784
wikipedia_en_100_0/train/0039.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:227d9a130e6d673bbe6e2024eca61c07cc81e1fc3e38c1165a6a4f7cc02809d4
+ size 233381568
wikipedia_en_100_0/train/0040.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:32341497a0e8c914c093227de37baf092b13aae2d60b682c2add024a51dba4cd
+ size 240678334
wikipedia_en_100_0/train/0041.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4aa4ec697f8e6821075bff9701ebce5b832158a8803859ba2a70a9d949bf065d
+ size 237807552
wikipedia_en_100_0/train/0042.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8eb878b7a75bba31468656a8a7541f1ef750e338058164e2380562b3a6accf88
+ size 242738078
wikipedia_en_100_0/train/0043.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b3098e24e4f13ae80b3655da79ad8be94f6e835573a81b430277622dc90180f1
+ size 243314892
wikipedia_en_100_0/train/0044.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab2ee27b25e445ebf6722ff037c98d47eb97ec42b92f0f1831e2f92db0844ec8
+ size 242084161
wikipedia_en_100_0/train/0045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e819bd8bee93630416b3d4a7b69f40847751b3306bbfea79291aea628f8450d4
+ size 240865821
wikipedia_en_100_0/train/0046.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:08dca3cb91e5aabd171df9ac5ad0a355daee4f0045fd76dd9b6f2ec93fbc758a
+ size 240994833
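Each `ADDED` parquet entry above is not the data itself but a Git LFS pointer file: three lines giving the spec `version`, the `sha256` object id, and the file `size` in bytes. A small sketch of parsing such a pointer (the function name is illustrative):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file ("key value" lines) into a dict,
    converting the byte size to an int."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])
    return fields

# The pointer committed for wikipedia_en_100_0/train/0000.parquet above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:0042535492136786245f587a3632f9e197019b0a079456756922685af182c9a7
size 264622407"""
```

The actual parquet bytes live in LFS storage keyed by the `oid`; the repository tracks only these small pointer files.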