Modalities: Text · Formats: parquet · Sub-tasks: extractive-qa · Languages: English · ArXiv: 1910.09753 · Libraries: Datasets, Dask · License: unknown

Commit a6708a2 (1 parent: e117573), committed by parquet-converter

Update parquet files
README.md DELETED
@@ -1,350 +0,0 @@
- ---
- annotations_creators:
- - found
- language_creators:
- - found
- language:
- - en
- license:
- - unknown
- multilinguality:
- - monolingual
- size_categories:
- - 100K<n<1M
- source_datasets:
- - extended|drop
- - extended|hotpot_qa
- - extended|natural_questions
- - extended|race
- - extended|search_qa
- - extended|squad
- - extended|trivia_qa
- task_categories:
- - question-answering
- task_ids:
- - extractive-qa
- paperswithcode_id: mrqa-2019
- pretty_name: MRQA 2019
- dataset_info:
-   features:
-   - name: subset
-     dtype: string
-   - name: context
-     dtype: string
-   - name: context_tokens
-     sequence:
-     - name: tokens
-       dtype: string
-     - name: offsets
-       dtype: int32
-   - name: qid
-     dtype: string
-   - name: question
-     dtype: string
-   - name: question_tokens
-     sequence:
-     - name: tokens
-       dtype: string
-     - name: offsets
-       dtype: int32
-   - name: detected_answers
-     sequence:
-     - name: text
-       dtype: string
-     - name: char_spans
-       sequence:
-       - name: start
-         dtype: int32
-       - name: end
-         dtype: int32
-     - name: token_spans
-       sequence:
-       - name: start
-         dtype: int32
-       - name: end
-         dtype: int32
-   - name: answers
-     sequence: string
-   config_name: plain_text
-   splits:
-   - name: train
-     num_bytes: 4090681873
-     num_examples: 516819
-   - name: test
-     num_bytes: 57712177
-     num_examples: 9633
-   - name: validation
-     num_bytes: 484107026
-     num_examples: 58221
-   download_size: 1479518355
-   dataset_size: 4632501076
- ---
82
-
83
- # Dataset Card for MRQA 2019
84
-
85
- ## Table of Contents
86
- - [Dataset Description](#dataset-description)
87
- - [Dataset Summary](#dataset-summary)
88
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
89
- - [Languages](#languages)
90
- - [Dataset Structure](#dataset-structure)
91
- - [Data Instances](#data-instances)
92
- - [Data Fields](#data-fields)
93
- - [Data Splits](#data-splits)
94
- - [Dataset Creation](#dataset-creation)
95
- - [Curation Rationale](#curation-rationale)
96
- - [Source Data](#source-data)
97
- - [Annotations](#annotations)
98
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
99
- - [Considerations for Using the Data](#considerations-for-using-the-data)
100
- - [Social Impact of Dataset](#social-impact-of-dataset)
101
- - [Discussion of Biases](#discussion-of-biases)
102
- - [Other Known Limitations](#other-known-limitations)
103
- - [Additional Information](#additional-information)
104
- - [Dataset Curators](#dataset-curators)
105
- - [Licensing Information](#licensing-information)
106
- - [Citation Information](#citation-information)
107
- - [Contributions](#contributions)
108
-
109
- ## Dataset Description
110
-
111
- - **Homepage:** [MRQA 2019 Shared Task](https://mrqa.github.io/2019/shared.html)
112
- - **Repository:** [MRQA 2019 Github repository](https://github.com/mrqa/MRQA-Shared-Task-2019)
113
- - **Paper:** [MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension
114
- ](https://arxiv.org/abs/1910.09753)
115
- - **Leaderboard:** [Shared task](https://mrqa.github.io/2019/shared.html)
116
- - **Point of Contact:** [[email protected]]([email protected])
117
-
118
- ### Dataset Summary
119
-
120
- The MRQA 2019 Shared Task focuses on generalization in question answering. An effective question answering system should do more than merely interpolate from the training set to answer test examples drawn from the same distribution: it should also be able to extrapolate to out-of-distribution examples — a significantly harder challenge.
121
-
122
- The dataset is a collection of 18 existing QA dataset (carefully selected subset of them) and converted to the same format (SQuAD format). Among these 18 datasets, six datasets were made available for training, six datasets were made available for development, and the final six for testing. The dataset is released as part of the MRQA 2019 Shared Task.
123
-
124
- ### Supported Tasks and Leaderboards
125
-
126
- From the official repository:
127
-
128
- *The format of the task is extractive question answering. Given a question and context passage, systems must find the word or phrase in the document that best answers the question. While this format is somewhat restrictive, it allows us to leverage many existing datasets, and its simplicity helps us focus on out-of-domain generalization, instead of other important but orthogonal challenges.*
129
-
130
- *We have adapted several existing datasets from their original formats and settings to conform to our unified extractive setting. Most notably:*
131
- - *We provide only a single, length-limited context.*
132
- - *There are no unanswerable or non-span answer questions.*
133
- - *All questions have at least one accepted answer that is found exactly in the context.*
134
-
135
- *A span is judged to be an exact match if it matches the answer string after performing normalization consistent with the SQuAD dataset. Specifically:*
136
- - *The text is uncased.*
137
- - *All punctuation is stripped.*
138
- - *All articles `{a, an, the}` are removed.*
139
- - *All consecutive whitespace markers are compressed to just a single normal space `' '`.*
140
-
141
- Answers are evaluated using exact match and token-level F1 metrics. One can refer to the [mrqa_official_eval.py](https://github.com/mrqa/MRQA-Shared-Task-2019/blob/master/mrqa_official_eval.py) for evaluation.
142
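A minimal sketch of that SQuAD-style normalization, for reference (the authoritative logic lives in `mrqa_official_eval.py`; this only approximates the rules listed above):

```python
import re
import string

def normalize_answer(s: str) -> str:
    """Apply the SQuAD-style normalization described above (a sketch)."""
    s = s.lower()                                                 # uncase
    s = "".join(ch for ch in s if ch not in string.punctuation)  # strip punctuation
    s = re.sub(r"\b(a|an|the)\b", " ", s)                         # remove articles
    return " ".join(s.split())                                    # collapse whitespace

def exact_match(prediction: str, gold: str) -> bool:
    return normalize_answer(prediction) == normalize_answer(gold)
```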
-
- 
- ### Languages
- 
- The text in the dataset is in English. The associated BCP-47 code is `en`.
- 
- ## Dataset Structure
- 
- ### Data Instances
- 
- An example looks like this:
- ```
- {
-     'qid': 'f43c83e38d1e424ea00f8ad3c77ec999',
-     'subset': 'SQuAD',
- 
-     'context': 'CBS broadcast Super Bowl 50 in the U.S., and charged an average of $5 million for a 30-second commercial during the game. The Super Bowl 50 halftime show was headlined by the British rock group Coldplay with special guest performers Beyoncé and Bruno Mars, who headlined the Super Bowl XLVII and Super Bowl XLVIII halftime shows, respectively. It was the third-most watched U.S. broadcast ever.',
-     'context_tokens': {
-         'offsets': [0, 4, 14, 20, 25, 28, 31, 35, 39, 41, 45, 53, 56, 64, 67, 68, 70, 78, 82, 84, 94, 105, 112, 116, 120, 122, 126, 132, 137, 140, 149, 154, 158, 168, 171, 175, 183, 188, 194, 203, 208, 216, 222, 233, 241, 245, 251, 255, 257, 261, 271, 275, 281, 286, 292, 296, 302, 307, 314, 323, 328, 330, 342, 344, 347, 351, 355, 360, 361, 366, 374, 379, 389, 393],
-         'tokens': ['CBS', 'broadcast', 'Super', 'Bowl', '50', 'in', 'the', 'U.S.', ',', 'and', 'charged', 'an', 'average', 'of', '$', '5', 'million', 'for', 'a', '30-second', 'commercial', 'during', 'the', 'game', '.', 'The', 'Super', 'Bowl', '50', 'halftime', 'show', 'was', 'headlined', 'by', 'the', 'British', 'rock', 'group', 'Coldplay', 'with', 'special', 'guest', 'performers', 'Beyoncé', 'and', 'Bruno', 'Mars', ',', 'who', 'headlined', 'the', 'Super', 'Bowl', 'XLVII', 'and', 'Super', 'Bowl', 'XLVIII', 'halftime', 'shows', ',', 'respectively', '.', 'It', 'was', 'the', 'third', '-', 'most', 'watched', 'U.S.', 'broadcast', 'ever', '.']
-     },
- 
-     'question': "Who was the main performer at this year's halftime show?",
-     'question_tokens': {
-         'offsets': [0, 4, 8, 12, 17, 27, 30, 35, 39, 42, 51, 55],
-         'tokens': ['Who', 'was', 'the', 'main', 'performer', 'at', 'this', 'year', "'s", 'halftime', 'show', '?']
-     },
- 
-     'detected_answers': {
-         'char_spans': [
-             {'end': [201], 'start': [194]},
-             {'end': [201], 'start': [194]},
-             {'end': [201], 'start': [194]}
-         ],
-         'text': ['Coldplay', 'Coldplay', 'Coldplay'],
-         'token_spans': [
-             {'end': [38], 'start': [38]},
-             {'end': [38], 'start': [38]},
-             {'end': [38], 'start': [38]}
-         ]
-     },
- 
-     'answers': ['Coldplay', 'Coldplay', 'Coldplay'],
- }
- ```
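Such instances can be loaded with the `datasets` library; a minimal sketch, assuming the dataset id `mrqa` on the Hugging Face Hub:

```python
from datasets import load_dataset

ds = load_dataset("mrqa")      # assumption: the Hub dataset id is "mrqa"
example = ds["train"][0]       # splits: train, validation, test
print(example["subset"], example["qid"])
print(example["question"])
print(example["answers"])
```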
- 
- ### Data Fields
- 
- - `subset`: which of the source datasets this example comes from.
- - `context`: This is the raw text of the supporting passage. Three special token types have been inserted: `[TLE]` precedes document titles, `[DOC]` denotes document breaks, and `[PAR]` denotes paragraph breaks. The maximum length of the context is 800 tokens.
- - `context_tokens`: A tokenized version of the supporting passage, using spaCy. Each token is a tuple of the token string and token character offset. The maximum number of tokens is 800.
-   - `tokens`: list of tokens.
-   - `offsets`: list of offsets.
- - `qas`: A list of questions for the given context.
- - `qid`: A unique identifier for the question. The `qid` is unique across all datasets.
- - `question`: The raw text of the question.
- - `question_tokens`: A tokenized version of the question. The tokenizer and token format are the same as for the context.
-   - `tokens`: list of tokens.
-   - `offsets`: list of offsets.
- - `detected_answers`: A list of answer spans for the given question that index into the context. For some datasets these spans have been automatically detected using searching heuristics. The same answer may appear multiple times in the text --- each of these occurrences is recorded. For example, if `42` is the answer and the context is `"The answer is 42. 42 is the answer."`, two occurrences are marked.
-   - `text`: The raw text of the detected answer.
-   - `char_spans`: Inclusive (start, end) character spans (indexing into the raw context).
-     - `start`: start (single element)
-     - `end`: end (single element)
-   - `token_spans`: Inclusive (start, end) token spans (indexing into the tokenized context).
-     - `start`: start (single element)
-     - `end`: end (single element)
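Since both span types are inclusive on both ends, recovering an answer string takes an end-plus-one slice; a quick sketch using the instance shown earlier:

```python
def answer_from_char_span(context: str, start: int, end: int) -> str:
    """char_spans are inclusive, so slice one past `end`."""
    return context[start : end + 1]

# From the example above, characters 194..201 cover the answer:
# answer_from_char_span(example["context"], 194, 201) == "Coldplay"
```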
- 
- ### Data Splits
- 
- **Training data**
- 
- | Dataset | Number of Examples |
- | :-----: | :------: |
- | [SQuAD](https://arxiv.org/abs/1606.05250) | 86,588 |
- | [NewsQA](https://arxiv.org/abs/1611.09830) | 74,160 |
- | [TriviaQA](https://arxiv.org/abs/1705.03551) | 61,688 |
- | [SearchQA](https://arxiv.org/abs/1704.05179) | 117,384 |
- | [HotpotQA](https://arxiv.org/abs/1809.09600) | 72,928 |
- | [NaturalQuestions](https://ai.google/research/pubs/pub47761) | 104,071 |
- 
- **Development data**
- 
- This in-domain data may be used to help develop models.
- 
- | Dataset | Number of Examples |
- | :-----: | :------: |
- | [SQuAD](https://arxiv.org/abs/1606.05250) | 10,507 |
- | [NewsQA](https://arxiv.org/abs/1611.09830) | 4,212 |
- | [TriviaQA](https://arxiv.org/abs/1705.03551) | 7,785 |
- | [SearchQA](https://arxiv.org/abs/1704.05179) | 16,980 |
- | [HotpotQA](https://arxiv.org/abs/1809.09600) | 5,904 |
- | [NaturalQuestions](https://ai.google/research/pubs/pub47761) | 12,836 |
- 
- **Test data**
- 
- The final test data contains only out-of-domain data.
- 
- | Dataset | Number of Examples |
- | :-----: | :------: |
- | [BioASQ](http://bioasq.org/) | 1,504 |
- | [DROP](https://arxiv.org/abs/1903.00161) | 1,503 |
- | [DuoRC](https://arxiv.org/abs/1804.07927) | 1,501 |
- | [RACE](https://arxiv.org/abs/1704.04683) | 674 |
- | [RelationExtraction](https://arxiv.org/abs/1706.04115) | 2,948 |
- | [TextbookQA](http://ai2-website.s3.amazonaws.com/publications/CVPR17_TQA.pdf) | 1,503 |
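The per-subset counts above can be checked against the `subset` field (reusing `ds` from the loading sketch earlier; note that subset labels follow each source file's header, e.g. NaturalQuestions appears as `NaturalQuestionsShort`):

```python
from collections import Counter

# Reuses `ds` from the loading sketch above.
for split in ("train", "validation", "test"):
    print(split, Counter(ds[split]["subset"]))
```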
- 
- From the official repository:
- 
- ***Note:** As previously mentioned, the out-of-domain datasets have been modified from their original settings to fit the unified MRQA Shared Task paradigm. At a high level, the following two major modifications have been made:*
- 
- *1. All QA-context pairs are extractive. That is, the answer is selected from the context and not via, e.g., multiple-choice.*
- *2. All contexts are capped at a maximum of `800` tokens. As a result, for longer contexts like Wikipedia articles, we only consider examples where the answer appears in the first `800` tokens.*
- 
- *As a result, some splits are harder than the original datasets (e.g., removal of multiple-choice in RACE), while some are easier (e.g., restricted context length in NaturalQuestions --- we use the short answer selection). Thus one should expect different performance ranges if comparing to previous work on these datasets.*
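The 800-token cap implies a filter along these lines (a hypothetical sketch over the raw-format inclusive token spans, not the official preprocessing code):

```python
MAX_CONTEXT_TOKENS = 800

def answer_within_cap(token_spans) -> bool:
    """token_spans: inclusive (start, end) pairs into the tokenized context.
    Keep an example only if some detected answer ends inside the cap."""
    return any(end < MAX_CONTEXT_TOKENS for _start, end in token_spans)
```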
- 
- ## Dataset Creation
- 
- ### Curation Rationale
- 
- From the official repository:
- 
- *Both train and test datasets have the same format described above, but may differ in some of the following ways:*
- - *Passage distribution: Test examples may involve passages from different sources (e.g., science, news, novels, medical abstracts, etc.) with pronounced syntactic and lexical differences.*
- - *Question distribution: Test examples may emphasize different styles of questions (e.g., entity-centric, relational, other tasks reformulated as QA, etc.) which may come from different sources (e.g., crowdworkers, domain experts, exam writers, etc.)*
- - *Joint distribution: Test examples may vary according to the relationship of the question to the passage (e.g., collected independent vs. dependent of evidence, multi-hop, etc.)*
- 
- ### Source Data
- 
- [More Information Needed]
- 
- #### Initial Data Collection and Normalization
- 
- [More Information Needed]
- 
- #### Who are the source language producers?
- 
- [More Information Needed]
- 
- ### Annotations
- 
- [More Information Needed]
- 
- #### Annotation process
- 
- [More Information Needed]
- 
- #### Who are the annotators?
- 
- [More Information Needed]
- 
- ### Personal and Sensitive Information
- 
- [More Information Needed]
- 
- ## Considerations for Using the Data
- 
- ### Social Impact of Dataset
- 
- [More Information Needed]
- 
- ### Discussion of Biases
- 
- [More Information Needed]
- 
- ### Other Known Limitations
- 
- [More Information Needed]
- 
- ## Additional Information
- 
- ### Dataset Curators
- 
- [More Information Needed]
- 
- ### Licensing Information
- 
- Unknown
- 
- ### Citation Information
- 
- ```
- @inproceedings{fisch2019mrqa,
-   title={{MRQA} 2019 Shared Task: Evaluating Generalization in Reading Comprehension},
-   author={Adam Fisch and Alon Talmor and Robin Jia and Minjoon Seo and Eunsol Choi and Danqi Chen},
-   booktitle={Proceedings of 2nd Machine Reading for Reading Comprehension (MRQA) Workshop at EMNLP},
-   year={2019},
- }
- ```
- 
- ### Contributions
- 
- Thanks to [@jimmycode](https://github.com/jimmycode) and [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"plain_text": {"description": "The MRQA 2019 Shared Task focuses on generalization in question answering.\nAn effective question answering system should do more than merely\ninterpolate from the training set to answer test examples drawn\nfrom the same distribution: it should also be able to extrapolate\nto out-of-distribution examples \u2014 a significantly harder challenge.\n\nThe dataset is a collection of 18 existing QA dataset (carefully selected\nsubset of them) and converted to the same format (SQuAD format). Among\nthese 18 datasets, six datasets were made available for training,\nsix datasets were made available for development, and the final six\nfor testing. The dataset is released as part of the MRQA 2019 Shared Task.\n", "citation": "@inproceedings{fisch2019mrqa,\n title={{MRQA} 2019 Shared Task: Evaluating Generalization in Reading Comprehension},\n author={Adam Fisch and Alon Talmor and Robin Jia and Minjoon Seo and Eunsol Choi and Danqi Chen},\n booktitle={Proceedings of 2nd Machine Reading for Reading Comprehension (MRQA) Workshop at EMNLP},\n year={2019},\n}\n", "homepage": "https://mrqa.github.io/2019/shared.html", "license": "Unknown", "features": {"subset": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "context_tokens": {"feature": {"tokens": {"dtype": "string", "id": null, "_type": "Value"}, "offsets": {"dtype": "int32", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "qid": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "question_tokens": {"feature": {"tokens": {"dtype": "string", "id": null, "_type": "Value"}, "offsets": {"dtype": "int32", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "detected_answers": {"feature": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "char_spans": {"feature": {"start": {"dtype": "int32", "id": null, "_type": "Value"}, "end": {"dtype": "int32", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "token_spans": {"feature": {"start": {"dtype": "int32", "id": null, "_type": "Value"}, "end": {"dtype": "int32", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "answers": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "mrqa", "config_name": "plain_text", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4090681873, "num_examples": 516819, "dataset_name": "mrqa"}, "test": {"name": "test", "num_bytes": 57712177, "num_examples": 9633, "dataset_name": "mrqa"}, "validation": {"name": "validation", "num_bytes": 484107026, "num_examples": 58221, "dataset_name": "mrqa"}}, "download_checksums": {"https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/SQuAD.jsonl.gz": {"num_bytes": 27621835, "checksum": "b094703b9c6f740cc2dfd70b3201b833553fcec0c8a522f22c2c6ff82ce2cc78"}, "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/NewsQA.jsonl.gz": {"num_bytes": 56451248, "checksum": "f1ccbf2d259ce1094aacde21a53592894248e5778814205dac94f0b086dbe968"}, "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/TriviaQA-web.jsonl.gz": {"num_bytes": 356784923, "checksum": "61fad6884370408282ad3ed0b5f25a9e932d9a724b6929ea03ea5344ff0cd3f7"}, "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/SearchQA.jsonl.gz": {"num_bytes": 641332495, "checksum": "32cda932667b7b65ab3079a8271d4e5726b4b989d0b862b25c77eb03a661b609"}, "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/HotpotQA.jsonl.gz": {"num_bytes": 107394872, "checksum": "3a94712c073dc9f29d88ac149faa01ef9c7c089f97ee25d9cbac39387550825d"}, "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/NaturalQuestionsShort.jsonl.gz": {"num_bytes": 116612493, "checksum": "6cdac324664b94b60be3203a077bf361d0bfa68a17af9b71def1186a6958a68c"}, "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/SQuAD.jsonl.gz": {"num_bytes": 3474262, "checksum": "5afa4b088adf297fc29374ddf2d44d974b8837380e2554e62edf258fee5c32ee"}, "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/NewsQA.jsonl.gz": {"num_bytes": 3142984, "checksum": "66bfb10cab2029bbc7d1afaece20c35fac341b1c179d15b70fde22a207f096ae"}, "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/TriviaQA-web.jsonl.gz": {"num_bytes": 44971198, "checksum": "faf8add436de5a5fa81071a4e7190850d7e9a20acc811439e8a127ba8ec25640"}, "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/SearchQA.jsonl.gz": {"num_bytes": 92526612, "checksum": "c84d2cc02cac5aa9d576ce1cd22900e9d75fe8a37bc795901c36cae6ef9e5ff0"}, "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/HotpotQA.jsonl.gz": {"num_bytes": 10029807, "checksum": "43bb9291525d8b59229ba327b67cca42f0a9c23798c455f6fbe813e9979cca84"}, "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/NaturalQuestionsShort.jsonl.gz": {"num_bytes": 10424248, "checksum": "2ba8b2181b520f81b49d62c0e4a23819f33d5dec0e8cf4a623edcda0feb73530"}, "http://participants-area.bioasq.org/MRQA2019/": {"num_bytes": 2666134, "checksum": "d8f237baea33bd0f4a664ef37ccd893cc682fd9458383dc1d1b8eb4685bb9efc"}, "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/DROP.jsonl.gz": {"num_bytes": 592127, "checksum": "3f7b6b8131cd523d4451e98cf24adc53a92519763597261d28ae83f3920849ab"}, "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/DuoRC.ParaphraseRC.jsonl.gz": {"num_bytes": 1197881, "checksum": "aeb8b9a31044be2ba3d62a456d61b2d447ff76dabe6fa77260b6efed0fb4c010"}, "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/RACE.jsonl.gz": {"num_bytes": 1563018, "checksum": "c620ca043c78504ea02d1cef494207c6c76a5e5dedd7976f5fed5eb9724864b8"}, "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/RelationExtraction.jsonl.gz": {"num_bytes": 850817, "checksum": "845668398356208246605fa1f363de63b45848c946d56514edcc8d00d12530ea"}, "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/TextbookQA.jsonl.gz": {"num_bytes": 1881401, "checksum": "1e861f197e739ead1947c60fa0917a02205dd48a559502194d7085ccd8608b64"}}, "download_size": 1479518355, "post_processing_size": null, "dataset_size": 4632501076, "size_in_bytes": 6112019431}}
mrqa.py DELETED
@@ -1,196 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """MRQA 2019 Shared task dataset."""
- 
- 
- import json
- 
- import datasets
- 
- 
- _CITATION = """\
- @inproceedings{fisch2019mrqa,
-     title={{MRQA} 2019 Shared Task: Evaluating Generalization in Reading Comprehension},
-     author={Adam Fisch and Alon Talmor and Robin Jia and Minjoon Seo and Eunsol Choi and Danqi Chen},
-     booktitle={Proceedings of 2nd Machine Reading for Reading Comprehension (MRQA) Workshop at EMNLP},
-     year={2019},
- }
- """
- 
- _DESCRIPTION = """\
- The MRQA 2019 Shared Task focuses on generalization in question answering.
- An effective question answering system should do more than merely
- interpolate from the training set to answer test examples drawn
- from the same distribution: it should also be able to extrapolate
- to out-of-distribution examples — a significantly harder challenge.
- 
- The dataset is a collection of 18 existing QA dataset (carefully selected
- subset of them) and converted to the same format (SQuAD format). Among
- these 18 datasets, six datasets were made available for training,
- six datasets were made available for development, and the final six
- for testing. The dataset is released as part of the MRQA 2019 Shared Task.
- """
- 
- _HOMEPAGE = "https://mrqa.github.io/2019/shared.html"
- 
- _LICENSE = "Unknown"
- 
- _URLs = {
-     # Train sub-datasets
-     "train+SQuAD": "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/SQuAD.jsonl.gz",
-     "train+NewsQA": "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/NewsQA.jsonl.gz",
-     "train+TriviaQA": "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/TriviaQA-web.jsonl.gz",
-     "train+SearchQA": "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/SearchQA.jsonl.gz",
-     "train+HotpotQA": "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/HotpotQA.jsonl.gz",
-     "train+NaturalQuestions": "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/NaturalQuestionsShort.jsonl.gz",
-     # Validation sub-datasets
-     "validation+SQuAD": "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/SQuAD.jsonl.gz",
-     "validation+NewsQA": "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/NewsQA.jsonl.gz",
-     "validation+TriviaQA": "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/TriviaQA-web.jsonl.gz",
-     "validation+SearchQA": "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/SearchQA.jsonl.gz",
-     "validation+HotpotQA": "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/HotpotQA.jsonl.gz",
-     "validation+NaturalQuestions": "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/NaturalQuestionsShort.jsonl.gz",
-     # Test sub-datasets
-     "test+BioASQ": "http://participants-area.bioasq.org/MRQA2019/",  # BioASQ.jsonl.gz
-     "test+DROP": "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/DROP.jsonl.gz",
-     "test+DuoRC": "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/DuoRC.ParaphraseRC.jsonl.gz",
-     "test+RACE": "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/RACE.jsonl.gz",
-     "test+RelationExtraction": "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/RelationExtraction.jsonl.gz",
-     "test+TextbookQA": "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/TextbookQA.jsonl.gz",
- }
- 
- 
- class Mrqa(datasets.GeneratorBasedBuilder):
-     """MRQA 2019 Shared task dataset."""
- 
-     VERSION = datasets.Version("1.1.0")
- 
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name="plain_text", description="Plain text", version=VERSION),
-     ]
- 
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             # Format is derived from https://github.com/mrqa/MRQA-Shared-Task-2019#mrqa-format
-             features=datasets.Features(
-                 {
-                     "subset": datasets.Value("string"),
-                     "context": datasets.Value("string"),
-                     "context_tokens": datasets.Sequence(
-                         {
-                             "tokens": datasets.Value("string"),
-                             "offsets": datasets.Value("int32"),
-                         }
-                     ),
-                     "qid": datasets.Value("string"),
-                     "question": datasets.Value("string"),
-                     "question_tokens": datasets.Sequence(
-                         {
-                             "tokens": datasets.Value("string"),
-                             "offsets": datasets.Value("int32"),
-                         }
-                     ),
-                     "detected_answers": datasets.Sequence(
-                         {
-                             "text": datasets.Value("string"),
-                             "char_spans": datasets.Sequence(
-                                 {
-                                     "start": datasets.Value("int32"),
-                                     "end": datasets.Value("int32"),
-                                 }
-                             ),
-                             "token_spans": datasets.Sequence(
-                                 {
-                                     "start": datasets.Value("int32"),
-                                     "end": datasets.Value("int32"),
-                                 }
-                             ),
-                         }
-                     ),
-                     "answers": datasets.Sequence(datasets.Value("string")),
-                 }
-             ),
-             supervised_keys=None,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
- 
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         data_dir = dl_manager.download_and_extract(_URLs)
- 
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "filepaths_dict": data_dir,
-                     "split": "train",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "filepaths_dict": data_dir,
-                     "split": "test",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "filepaths_dict": data_dir,
-                     "split": "validation",
-                 },
-             ),
-         ]
- 
-     def _generate_examples(self, filepaths_dict, split):
-         """Yields examples."""
-         for source, filepath in filepaths_dict.items():
-             if split not in source:
-                 continue
-             with open(filepath, encoding="utf-8") as f:
-                 header = next(f)
-                 subset = json.loads(header)["header"]["dataset"]
- 
-                 for row in f:
-                     paragraph = json.loads(row)
-                     context = paragraph["context"].strip()
-                     context_tokens = [{"tokens": t[0], "offsets": t[1]} for t in paragraph["context_tokens"]]
-                     for qa in paragraph["qas"]:
-                         qid = qa["qid"]
-                         question = qa["question"].strip()
-                         question_tokens = [{"tokens": t[0], "offsets": t[1]} for t in qa["question_tokens"]]
-                         detected_answers = []
-                         for detect_ans in qa["detected_answers"]:
-                             detected_answers.append(
-                                 {
-                                     "text": detect_ans["text"].strip(),
-                                     "char_spans": [{"start": t[0], "end": t[1]} for t in detect_ans["char_spans"]],
-                                     "token_spans": [{"start": t[0], "end": t[1]} for t in detect_ans["token_spans"]],
-                                 }
-                             )
-                         answers = qa["answers"]
-                         yield f"{source}_{qid}", {
-                             "subset": subset,
-                             "context": context,
-                             "context_tokens": context_tokens,
-                             "qid": qid,
-                             "question": question,
-                             "question_tokens": question_tokens,
-                             "detected_answers": detected_answers,
-                             "answers": answers,
-                         }
plain_text/mrqa-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:be1697aab16c376b78f9d3b0d02330cbc2da6451357dca37e58200afcff71837
+ size 15483097
plain_text/mrqa-train-00000-of-00009.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:28d6d0bf627ed5eaba5204faf0615f55d225f0b60245e7525441f3cd7083fe2b
+ size 115781028
plain_text/mrqa-train-00001-of-00009.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4420fa6562cd57c5cd7fc0ca36246bfb480472de0f8d534b8f2a9bbaab84c59a
+ size 106226427
plain_text/mrqa-train-00002-of-00009.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bf3518e80b0ff9f5ba3155b747e47d21c3201c70090528294f7af8b2784d5724
+ size 202356638
plain_text/mrqa-train-00003-of-00009.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:78c49ffcb7cc7be757b121b7599f1af124ea24068b16d0df72f3930171d28e14
+ size 205250930
plain_text/mrqa-train-00004-of-00009.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:95a92bbdc8ca3bfbd701bc643c7954974c5d069d207ea30334196645f99a6f2b
+ size 201615529
plain_text/mrqa-train-00005-of-00009.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:39968e54c41e2b047e4a9993be8ae9d1e3c466f5b24966cd4ff0f0c32c5f4d08
+ size 201844010
plain_text/mrqa-train-00006-of-00009.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1c4752955c7a43f0867025f0d5d95380b2cbfb8a05bc0e900849447f7088ee9e
+ size 201938055
plain_text/mrqa-train-00007-of-00009.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a24dfa05d9f0ed8630356595f00b31efdfeea2e43cf8b91d45a3f73980ba6161
+ size 218428819
plain_text/mrqa-train-00008-of-00009.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:50d788b3ca661a9142368d3f63d625c537ff70d4b37f8b0a0ab1446766fd9323
+ size 30638073
plain_text/mrqa-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ee89ed57e2257f6d46a21d1e71cc4caa5a6bc25d91691e2c870810b8a71bd5b3
+ size 177544050
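With the data converted to Parquet, the shards can also be read directly; a sketch using pandas over the `hf://` filesystem (requires `huggingface_hub` and `pyarrow`; the repository path below is an assumption, adjust it to the actual dataset repo):

```python
import pandas as pd

# Assumed repo path under hf://datasets/<repo_id>/<config>/<file>.
df = pd.read_parquet("hf://datasets/mrqa/plain_text/mrqa-validation.parquet")
print(len(df), df.columns.tolist())
```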