Datasets · Modalities: Text · Formats: parquet · Languages: English · Libraries: Datasets, pandas
parquet-converter committed on
Commit a563636 · 1 parent: 340c95d

Update parquet files

.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,225 +0,0 @@
- ---
- annotations_creators:
- - crowdsourced
- language:
- - en
- language_creators:
- - found
- license:
- - cc-by-4.0
- multilinguality:
- - monolingual
- pretty_name: Question Answering via Sentence Composition (QASC)
- size_categories:
- - 1K<n<10K
- source_datasets:
- - original
- task_categories:
- - question-answering
- - multiple-choice
- task_ids:
- - extractive-qa
- - multiple-choice-qa
- paperswithcode_id: qasc
- dataset_info:
-   features:
-   - name: id
-     dtype: string
-   - name: question
-     dtype: string
-   - name: choices
-     sequence:
-     - name: text
-       dtype: string
-     - name: label
-       dtype: string
-   - name: answerKey
-     dtype: string
-   - name: fact1
-     dtype: string
-   - name: fact2
-     dtype: string
-   - name: combinedfact
-     dtype: string
-   - name: formatted_question
-     dtype: string
-   splits:
-   - name: test
-     num_bytes: 393683
-     num_examples: 920
-   - name: train
-     num_bytes: 4919377
-     num_examples: 8134
-   - name: validation
-     num_bytes: 562352
-     num_examples: 926
-   download_size: 1616514
-   dataset_size: 5875412
- ---
-
- # Dataset Card for "qasc"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://allenai.org/data/qasc](https://allenai.org/data/qasc)
- - **Repository:** https://github.com/allenai/qasc/
- - **Paper:** [QASC: A Dataset for Question Answering via Sentence Composition](https://arxiv.org/abs/1910.11473)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 1.54 MB
- - **Size of the generated dataset:** 5.60 MB
- - **Total amount of disk used:** 7.14 MB
-
- ### Dataset Summary
-
- QASC is a question-answering dataset with a focus on sentence composition. It consists of 9,980 8-way multiple-choice
- questions about grade school science (8,134 train, 926 dev, 920 test), and comes with a corpus of 17M sentences.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### default
-
- - **Size of downloaded dataset files:** 1.54 MB
- - **Size of the generated dataset:** 5.60 MB
- - **Total amount of disk used:** 7.14 MB
-
- An example of 'validation' looks as follows.
- ```
- {
-     "answerKey": "F",
-     "choices": {
-         "label": ["A", "B", "C", "D", "E", "F", "G", "H"],
-         "text": ["sand", "occurs over a wide range", "forests", "Global warming", "rapid changes occur", "local weather conditions", "measure of motion", "city life"]
-     },
-     "combinedfact": "Climate is generally described in terms of local weather conditions",
-     "fact1": "Climate is generally described in terms of temperature and moisture.",
-     "fact2": "Fire behavior is driven by local weather conditions such as winds, temperature and moisture.",
-     "formatted_question": "Climate is generally described in terms of what? (A) sand (B) occurs over a wide range (C) forests (D) Global warming (E) rapid changes occur (F) local weather conditions (G) measure of motion (H) city life",
-     "id": "3NGI5ARFTT4HNGVWXAMLNBMFA0U1PG",
-     "question": "Climate is generally described in terms of what?"
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### default
- - `id`: a `string` feature.
- - `question`: a `string` feature.
- - `choices`: a dictionary feature containing:
-   - `text`: a `string` feature.
-   - `label`: a `string` feature.
- - `answerKey`: a `string` feature.
- - `fact1`: a `string` feature.
- - `fact2`: a `string` feature.
- - `combinedfact`: a `string` feature.
- - `formatted_question`: a `string` feature.
-
- ### Data Splits
-
- | name  |train|validation|test|
- |-------|----:|---------:|---:|
- |default| 8134|       926| 920|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- The dataset is released under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
-
- ### Citation Information
-
- ```
- @article{allenai:qasc,
-     author    = {Tushar Khot and Peter Clark and Michal Guerquin and Peter Jansen and Ashish Sabharwal},
-     title     = {QASC: A Dataset for Question Answering via Sentence Composition},
-     journal   = {arXiv:1910.11473v2},
-     year      = {2020},
- }
- ```
-
- ### Contributions
-
- Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
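The deleted card above shows a sample validation instance whose `formatted_question` field interleaves the question stem with labeled answer choices. As a minimal sketch, the join rule below is inferred from that single example (question, then `(LABEL) text` pairs separated by spaces); it is an illustration, not an official specification of how the field was produced.

```python
# Sketch: how `formatted_question` relates to `question` and `choices`,
# reconstructed from the sample validation instance in the card above.

def format_question(question: str, labels: list[str], texts: list[str]) -> str:
    # Join each choice as "(LABEL) text" after the question stem.
    choices = " ".join(f"({label}) {text}" for label, text in zip(labels, texts))
    return f"{question} {choices}"

labels = ["A", "B", "C", "D", "E", "F", "G", "H"]
texts = ["sand", "occurs over a wide range", "forests", "Global warming",
         "rapid changes occur", "local weather conditions",
         "measure of motion", "city life"]

formatted = format_question(
    "Climate is generally described in terms of what?", labels, texts
)
```

Applied to the card's sample instance, this reproduces its `formatted_question` string exactly, and `answerKey` `"F"` indexes the choice `"local weather conditions"` in the parallel label/text lists.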
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "\nQASC is a question-answering dataset with a focus on sentence composition. It consists of 9,980 8-way multiple-choice \nquestions about grade school science (8,134 train, 926 dev, 920 test), and comes with a corpus of 17M sentences.\n", "citation": "@article{allenai:qasc,\n    author    = {Tushar Khot and Peter Clark and Michal Guerquin and Peter Jansen and Ashish Sabharwal},\n    title     = {QASC: A Dataset for Question Answering via Sentence Composition},\n    journal   = {arXiv:1910.11473v2},\n    year      = {2020},\n}\n", "homepage": "https://allenai.org/data/qasc", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "choices": {"feature": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "answerKey": {"dtype": "string", "id": null, "_type": "Value"}, "fact1": {"dtype": "string", "id": null, "_type": "Value"}, "fact2": {"dtype": "string", "id": null, "_type": "Value"}, "combinedfact": {"dtype": "string", "id": null, "_type": "Value"}, "formatted_question": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "qasc", "config_name": "default", "version": {"version_str": "0.1.0", "description": null, "datasets_version_to_prepare": null, "major": 0, "minor": 1, "patch": 0}, "splits": {"test": {"name": "test", "num_bytes": 393683, "num_examples": 920, "dataset_name": "qasc"}, "train": {"name": "train", "num_bytes": 4919377, "num_examples": 8134, "dataset_name": "qasc"}, "validation": {"name": "validation", "num_bytes": 562352, "num_examples": 926, "dataset_name": "qasc"}}, "download_checksums": {"http://data.allenai.org/downloads/qasc/qasc_dataset.tar.gz": {"num_bytes": 1616514, "checksum": "a7b3f2244f768974c609fd621346c931a72715609f171cb5544fc1da2a2ad55c"}}, "download_size": 1616514, "dataset_size": 5875412, "size_in_bytes": 7491926}}
default/qasc-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:936e9673979bc53a386ce3d6863a4edf1ecb66b5c134c696b19141b157eb5f2e
+ size 158240
default/qasc-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:13abdf558d8451ad915861975db90c77b9dbfc82f51df51e178def8e7d2b23eb
+ size 1967903
default/qasc-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2608fd649c99fb47371fc453d27bf1aa5354321607d0b0d049285ccfb35d5494
+ size 223552
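The three parquet files added above are stored as Git LFS pointer stubs rather than raw data: each is a short text file of `key value` lines (`version`, `oid`, `size`) per the Git LFS v1 pointer format, with the actual bytes held in LFS storage. A small sketch of parsing such a stub, using the test-split pointer shown above as input:

```python
# Parse a Git LFS v1 pointer stub (the text content of the files added above).
# Input is the test-split pointer from this commit.

POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:936e9673979bc53a386ce3d6863a4edf1ecb66b5c134c696b19141b157eb5f2e
size 158240
"""

def parse_lfs_pointer(text: str) -> dict:
    # Each non-empty line is "<key> <value>"; size is a byte count.
    fields = dict(line.split(" ", 1) for line in text.splitlines() if line)
    fields["size"] = int(fields["size"])
    return fields

info = parse_lfs_pointer(POINTER)
```

The `size` values in the stubs (158240, 1967903, 223552 bytes) are the sizes of the real parquet payloads, and the `oid` is the SHA-256 of the payload, which LFS clients use to fetch and verify it.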
qasc.py DELETED
@@ -1,123 +0,0 @@
- """TODO(qasc): Add a description here."""
-
-
- import json
-
- import datasets
-
-
- # TODO(qasc): BibTeX citation
- _CITATION = """\
- @article{allenai:qasc,
-     author    = {Tushar Khot and Peter Clark and Michal Guerquin and Peter Jansen and Ashish Sabharwal},
-     title     = {QASC: A Dataset for Question Answering via Sentence Composition},
-     journal   = {arXiv:1910.11473v2},
-     year      = {2020},
- }
- """
-
- # TODO(qasc):
- _DESCRIPTION = """
- QASC is a question-answering dataset with a focus on sentence composition. It consists of 9,980 8-way multiple-choice
- questions about grade school science (8,134 train, 926 dev, 920 test), and comes with a corpus of 17M sentences.
- """
- _URl = "http://data.allenai.org/downloads/qasc/qasc_dataset.tar.gz"
-
-
- class Qasc(datasets.GeneratorBasedBuilder):
-     """TODO(qasc): Short description of my dataset."""
-
-     # TODO(qasc): Set up version.
-     VERSION = datasets.Version("0.1.0")
-
-     def _info(self):
-         # TODO(qasc): Specifies the datasets.DatasetInfo object
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # datasets.features.FeatureConnectors
-             features=datasets.Features(
-                 {
-                     "id": datasets.Value("string"),
-                     "question": datasets.Value("string"),
-                     "choices": datasets.features.Sequence(
-                         {"text": datasets.Value("string"), "label": datasets.Value("string")}
-                     ),
-                     "answerKey": datasets.Value("string"),
-                     "fact1": datasets.Value("string"),
-                     "fact2": datasets.Value("string"),
-                     "combinedfact": datasets.Value("string"),
-                     "formatted_question": datasets.Value("string"),
-                     # These are the features of your dataset like images, labels ...
-                 }
-             ),
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage="https://allenai.org/data/qasc",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # TODO(qasc): Downloads the data and defines the splits
-         # dl_manager is a datasets.download.DownloadManager that can be used to
-         # download and extract URLs
-         archive = dl_manager.download(_URl)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": "/".join(["QASC_Dataset", "train.jsonl"]),
-                     "files": dl_manager.iter_archive(archive),
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": "/".join(["QASC_Dataset", "test.jsonl"]),
-                     "files": dl_manager.iter_archive(archive),
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": "/".join(["QASC_Dataset", "dev.jsonl"]),
-                     "files": dl_manager.iter_archive(archive),
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, filepath, files):
-         """Yields examples."""
-         # TODO(qasc): Yields (key, example) tuples from the dataset
-         for path, f in files:
-             if path == filepath:
-                 for row in f:
-                     data = json.loads(row.decode("utf-8"))
-                     answerkey = data.get("answerKey", "")
-                     id_ = data["id"]
-                     question = data["question"]["stem"]
-                     choices = data["question"]["choices"]
-                     text_choices = [choice["text"] for choice in choices]
-                     label_choices = [choice["label"] for choice in choices]
-                     fact1 = data.get("fact1", "")
-                     fact2 = data.get("fact2", "")
-                     combined_fact = data.get("combinedfact", "")
-                     formatted_question = data.get("formatted_question", "")
-                     yield id_, {
-                         "id": id_,
-                         "answerKey": answerkey,
-                         "question": question,
-                         "choices": {"text": text_choices, "label": label_choices},
-                         "fact1": fact1,
-                         "fact2": fact2,
-                         "combinedfact": combined_fact,
-                         "formatted_question": formatted_question,
-                     }
-                 break
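The core of the deleted loader script is the flattening in `_generate_examples`: a raw QASC JSONL record nests the stem and choices under `"question"`, and the loader emits a flat dict with parallel `text`/`label` lists, defaulting `answerKey` and the fact fields to `""` when absent (as in the unlabeled test split). The sketch below replays that transformation on a made-up record that follows the keys the script reads; the sample values are illustrative only.

```python
import json

# A made-up raw record shaped like the QASC jsonl the deleted script parsed:
# the stem and choices are nested under "question".
raw_row = json.dumps({
    "id": "example-0",
    "question": {
        "stem": "What do plants need to make food?",
        "choices": [
            {"label": "A", "text": "sunlight"},
            {"label": "B", "text": "sand"},
        ],
    },
    "answerKey": "A",
    "fact1": "Plants use sunlight to make food.",
})

def flatten(row: str) -> dict:
    # Mirror of the per-row logic in _generate_examples above.
    data = json.loads(row)
    choices = data["question"]["choices"]
    return {
        "id": data["id"],
        "question": data["question"]["stem"],
        "choices": {
            "text": [c["text"] for c in choices],
            "label": [c["label"] for c in choices],
        },
        # Missing keys (e.g. in test-split records) default to "".
        "answerKey": data.get("answerKey", ""),
        "fact1": data.get("fact1", ""),
        "fact2": data.get("fact2", ""),
        "combinedfact": data.get("combinedfact", ""),
        "formatted_question": data.get("formatted_question", ""),
    }

example = flatten(raw_row)
```

The output matches the flat feature schema declared in `_info` and documented in the deleted README's "Data Fields" section, which is also the schema of the parquet files this commit adds.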