---
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
tags:
- multi-document NFQA
- non-factoid QA
pretty_name: wikihowqa
size_categories:
- 10K<n<100K
---
# Dataset Card for WikiHowQA

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
  - [Data Fields](#data-fields)
  - [Data Instances](#data-instances)
  - [Data Statistics](#data-statistics)
  - [Dataset Information](#dataset-information)
- [Dataset Usage](#dataset-usage)
- [Additional Information](#additional-information)
  - [Dataset Curators](#curators)
  - [Licensing Information](#license)
  - [Citation Information](#citation)
- [Considerations for Using the Data](#considerations)
  - [Social Impact of Dataset](#social-impact)
  - [Discussion of Biases](#biases)
  - [Other Known Limitations](#limitations)
- [Data Loading](#data-loading)



<a name="dataset-description"></a>
## Dataset Description

- **Homepage:** [WikiHowQA Dataset](https://lurunchik.github.io/WikiHowQA/)
- **Repository:** [WikiHowQA Repository](https://github.com/lurunchik/WikiHowQA)
- **Paper:** [WikiHowQA Paper](https://lurunchik.github.io/WikiHowQA/data/ACL_MD_NFQA_dataset.pdf)
- **Leaderboard:** [WikiHowQA Leaderboard](https://lurunchik.github.io/WikiHowQA/leaderboard)
- **Point of Contact:** [Contact](mailto:[email protected])


**WikiHowQA** is a unique collection of 'how-to' content from WikiHow, transformed into a rich dataset featuring 11,746 human-authored answers and 74,527 supporting documents. Designed for researchers, it presents an opportunity to tackle the challenges of creating comprehensive answers from multiple documents and grounding those answers in the real-world context provided by the supporting documents.


<a name="dataset-structure"></a>
## Dataset Structure

### Data Fields

- `article_id`: An integer identifier for the article, corresponding to the `article_id` field in the WikiHow API.
- `question`: The non-factoid instructional question.
- `answer`: The human-written answer to the question, corresponding to the human-written article summary on the [WikiHow website](https://www.wikihow.com/Main-Page).
- `related_document_urls_wayback_snapshots`: A list of URLs to web archive snapshots of the related documents, corresponding to the references in the WikiHow article.
- `split`: The split of the dataset that the instance belongs to ('train', 'validation', or 'test').
- `cluster`: An integer identifier for the cluster of paraphrased questions that the instance belongs to. The dataset is split into 'train', 'validation', and 'test' such that all instances from the same cluster belong to the same split, ensuring there is no intersection of paraphrased questions across splits. If you plan to create a new split of the dataset, maintain this clustering to avoid data leakage between splits.
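Because paraphrased questions share a cluster, any custom re-split should keep each cluster inside a single split. A minimal sketch of such a check, assuming only the `split` and `cluster` fields described above (the sample records here are hypothetical, not taken from the dataset):

```python
# Sketch: verify that no cluster spans multiple splits, which would
# leak paraphrased questions between train/validation/test.
from collections import defaultdict

def clusters_are_leak_free(records):
    """Return True if every cluster appears in exactly one split."""
    splits_per_cluster = defaultdict(set)
    for rec in records:
        splits_per_cluster[rec['cluster']].add(rec['split'])
    return all(len(splits) == 1 for splits in splits_per_cluster.values())

# Hypothetical sample records with only the two relevant fields.
sample = [
    {'cluster': 2635, 'split': 'train'},
    {'cluster': 2635, 'split': 'train'},
    {'cluster': 101, 'split': 'test'},
]
print(clusters_are_leak_free(sample))  # True: each cluster stays in one split
```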

<a name="dataset-instances"></a>
### Data Instances

An example instance from the WikiHowQA dataset:

```json
{
  "article_id": 1353800,
  "question": "How To Cook Pork Tenderloin",
  "answer": "To cook pork tenderloin, put it in a roasting pan and cook it in the oven for 55 minutes at 400 degrees Fahrenheit, turning it over halfway through. You can also sear the pork tenderloin on both sides in a skillet before putting it in the oven, which will reduce the cooking time to 15 minutes. If you want to grill pork tenderloin, start by preheating the grill to medium-high heat. Then, cook the tenderloin on the grill for 30-40 minutes over indirect heat, flipping it occasionally.",
  "related_document_urls_wayback_snapshots": ["http://web.archive.org/web/20210605161310/https://www.allrecipes.com/recipe/236114/pork-roast-with-the-worlds-best-rub/", "http://web.archive.org/web/20210423074902/https://www.bhg.com/recipes/how-to/food-storage-safety/using-a-meat-thermometer/", ...],
  "split": "train",
  "cluster": 2635
}
```

<a name="dataset-statistics"></a>
### Dataset Statistics

- Number of human-authored answers: 11,746
- Number of supporting documents: 74,527
- Average number of documents per question: 6.3
- Average number of sentences per answer: 3.9

<a name="dataset-information"></a>
### Dataset Information

The WikiHowQA dataset is divided into two parts: the QA part and the Document Content part.
The QA part contains the questions, answers, and links to web archive snapshots of the related HTML pages, and can be downloaded here.
The Document Content part contains the parsed HTML content and is accessible on request, after signing a Data Transfer Agreement with RMIT University.

Each dataset instance includes a question, a set of related documents, and a human-authored answer. The questions are non-factoid, requiring comprehensive, multi-sentence answers. The related documents provide the necessary information to generate an answer.


<a name="dataset-usage"></a>
## Dataset Usage

The dataset is designed for researchers and presents a unique opportunity to tackle the challenges of creating comprehensive answers from multiple documents, and grounding those answers in the real-world context provided by the supporting documents.

<a name="additional-information"></a>
## Additional Information

<a name="curators"></a>
### Dataset Curators
The WikiHowQA dataset was curated by researchers at RMIT University.

<a name="license"></a>
### Licensing Information
The QA part is distributed under the Creative Commons Attribution 4.0 (CC BY 4.0) license.
The Document Content part, containing the parsed HTML content, is accessible on request after signing a Data Transfer Agreement with RMIT University, which allows free use of the dataset for research purposes. The form to download and sign is available on the dataset website.

<a name="citation"></a>
### Citation Information
Please cite the following paper if you use this dataset:

```bibtex
@inproceedings{bolotova2023wikihowqa,
      title={WikiHowQA: A Comprehensive Benchmark for Multi-Document Non-Factoid Question Answering}, 
      author={Bolotova, Valeriia and Blinov, Vladislav and Filippova, Sofya and Scholer, Falk and Sanderson, Mark},
      booktitle="Proceedings of the 61th Conference of the Association for Computational Linguistics",
      year={2023}
}
```

<a name="considerations"></a>
## Considerations for Using the Data

<a name="social-impact"></a>
### Social Impact of the Dataset
The WikiHowQA dataset is a rich resource for researchers interested in question answering, information retrieval, and natural language understanding tasks. It can help in developing models that provide comprehensive answers to how-to questions, which can be beneficial in various applications such as customer support, tutoring systems, and personal assistants. However, as with any dataset, the potential for misuse or unintended consequences exists. For example, a model trained on this dataset might be used to generate misleading or incorrect answers if not properly validated.

<a name="biases"></a>
### Discussion of Biases
The WikiHowQA dataset is derived from WikiHow, a community-driven platform. While WikiHow has guidelines to ensure the quality and neutrality of its content, biases could still be present due to the demographic and ideological characteristics of its contributors. Users of the dataset should be aware of this potential bias.

<a name="limitations"></a>
### Other Known Limitations
The dataset only contains 'how-to' questions and their answers. Therefore, it may not be suitable for tasks that require understanding of other types of questions (e.g., why, what, when, who, etc.). Additionally, while the dataset contains a large number of instances, there may still be topics or types of questions that are underrepresented.

<a name="data-loading"></a>
## Data Loading

There are two primary ways to load the QA dataset part:

1. Directly from the file: if you have the `wikiHowNFQA.jsonl` file locally, you can load the dataset with the following Python code:

```python
import json

dataset = []
with open('wikiHowNFQA.jsonl', encoding='utf-8') as f:
    for line in f:
        dataset.append(json.loads(line))
```

This will result in a list of dictionaries, each representing a single instance in the dataset.
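Since each record carries a `split` field, the flat list can be partitioned into the train/validation/test subsets. A minimal sketch, assuming only the fields described in the Data Fields section (the sample records below are hypothetical):

```python
# Sketch: partition loaded JSONL records into splits using the `split` field.
def partition_by_split(records):
    """Group records into the three named splits of the dataset."""
    splits = {'train': [], 'validation': [], 'test': []}
    for rec in records:
        splits[rec['split']].append(rec)
    return splits

# Hypothetical sample records with only the relevant fields.
sample = [
    {'question': 'How To Cook Pork Tenderloin', 'split': 'train'},
    {'question': 'How To Tie a Tie', 'split': 'test'},
]
parts = partition_by_split(sample)
print(len(parts['train']), len(parts['test']))  # 1 1
```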

2. From the Hugging Face Datasets Hub: if the dataset is hosted on the Hub, you can load it directly using the `datasets` library:

```python
from datasets import load_dataset
dataset = load_dataset('wikiHowNFQA')
```
This will return a `DatasetDict` object, a dictionary-like object that maps split names ('train', 'validation', 'test') to `Dataset` objects. You can access a specific split with `dataset['train']`.