Lurunchik committed · Commit 27e9c34 · 1 Parent(s): efd96ef

add loading

Files changed (1): README.md (+30, -0)
README.md CHANGED
@@ -29,6 +29,8 @@ size_categories:
  - [Social Impact of Dataset](#social-impact)
  - [Discussion of Biases](#biases)
  - [Other Known Limitations](#limitations)
+ - [Data Loading](#data-loading)
+

  <a name="dataset-description"></a>
@@ -137,3 +139,31 @@ The WikiHowQA dataset is derived from WikiHow, a community-driven platform. Whil
  <a name="limitations"></a>
  ### Other Known Limitations
  The dataset only contains 'how-to' questions and their answers. Therefore, it may not be suitable for tasks that require understanding of other types of questions (e.g., why, what, when, who, etc.). Additionally, while the dataset contains a large number of instances, there may still be topics or types of questions that are underrepresented.
+
+ <a name="data-loading"></a>
+ ## Data Loading
+
+ There are two primary ways to load the main part of the dataset:
+
+ 1. Directly from the file. If you have the `wikiHowNFQA.jsonl` file locally, you can load the dataset with the following Python code:
+
+ ```python
+ import json
+
+ # Read one JSON object per line (JSON Lines format)
+ dataset = []
+ with open('wikiHowNFQA.jsonl') as f:
+     for line in f:
+         dataset.append(json.loads(line))
+ ```
+
+ This will result in a list of dictionaries, each representing a single instance in the dataset.
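+
+ As a quick sanity check (a minimal sketch that assumes only the list loaded above), you can confirm how many instances were read and which fields each record carries:
+
+ ```python
+ # Number of loaded instances and the field names of the first record
+ print(len(dataset))
+ print(dataset[0].keys())
+ ```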
+
+ 2. From the Hugging Face Datasets Hub. The dataset can be loaded directly with the `datasets` library:
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset('wikiHowNFQA')
+ ```
+
+ This will return a `DatasetDict` object, a dictionary-like object that maps split names (e.g., 'train', 'validation', 'test') to `Dataset` objects. You can access a specific split like so: `dataset['train']`.
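+
+ For example, to work with a single split (a minimal sketch, assuming the Hub copy exposes a 'train' split):
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset('wikiHowNFQA')
+
+ # Assumes a 'train' split; adjust to the splits actually published
+ train = dataset['train']
+ print(len(train))
+ print(train[0])
+ ```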