---
dataset_info:
  features:
  - name: uid
    dtype: string
  - name: query
    dtype: string
  - name: question
    dtype: string
  - name: simplified_query
    dtype: string
  - name: answer
    dtype: string
  - name: verbalized_answer
    dtype: string
  - name: verbalized_answer_2
    dtype: string
  - name: verbalized_answer_3
    dtype: string
  - name: verbalized_answer_4
    dtype: string
  - name: verbalized_answer_5
    dtype: string
  - name: verbalized_answer_6
    dtype: string
  - name: verbalized_answer_7
    dtype: string
  - name: verbalized_answer_8
    dtype: string
  splits:
  - name: train
    num_bytes: 2540548
    num_examples: 3500
  - name: validation
    num_bytes: 369571
    num_examples: 500
  - name: test
    num_bytes: 722302
    num_examples: 1000
  download_size: 1750172
  dataset_size: 3632421
task_categories:
- conversational
- question-answering
- text-generation
- text2text-generation
tags:
- qa
- knowledge-graph
- sparql
---

# Dataset Card for ParaQA-SPARQLtoText

## Table of Contents
- [Dataset Card for ParaQA-SPARQLtoText](#dataset-card-for-paraqa-sparqltotext)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
      - [New field `simplified_query`](#new-field-simplified_query)
      - [New split "valid"](#new-split-valid)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Types of questions](#types-of-questions)
    - [Data splits](#data-splits)
  - [Additional information](#additional-information)
    - [Related datasets](#related-datasets)
    - [Licensing information](#licensing-information)
    - [Citation information](#citation-information)
      - [This version of the corpus (with normalized SPARQL queries)](#this-version-of-the-corpus-with-normalized-sparql-queries)
      - [Original version](#original-version)


## Dataset Description

- **Paper:** [SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications (AACL-IJCNLP 2022)](https://aclanthology.org/2022.aacl-main.11/)
- **Point of Contact:** Gwénolé Lecorvé

### Dataset Summary

This is a special version of ParaQA in which the SPARQL queries have been reformatted for the SPARQL-to-Text generation task.

#### New field `simplified_query`

The new field `simplified_query` is derived from the field `query` by applying the following steps (a code sketch follows the list):

* replacing full URIs with a shorter prefixed form using the prefixes `resource:`, `property:` and `ontology:`;
* adding spaces around the delimiters `(`, `{`, `.`, `}` and `)`;
* randomizing the variable names;
* shuffling the clauses.
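
The snippet below is a minimal sketch of this kind of normalization, written for illustration only: the exact prefixes, regular expressions, variable-renaming scheme and clause-shuffling granularity used to build the dataset are assumptions.

```python
import random
import re

# Illustrative sketch only: prefixes, regexes and shuffling granularity
# are assumptions, not the exact script used to produce the dataset.
PREFIXES = {
    "http://dbpedia.org/resource/": "resource:",
    "http://dbpedia.org/property/": "property:",
    "http://dbpedia.org/ontology/": "ontology:",
}

def simplify_query(query: str, seed: int = 0) -> str:
    rng = random.Random(seed)

    # 1. Replace full URIs (<http://...>) with the short prefixed form.
    for uri, prefix in PREFIXES.items():
        query = re.sub(r"<" + re.escape(uri) + r"([^>]+)>", prefix + r"\1", query)

    # 2. Add spaces around the delimiters ( { . } )
    #    (naive: would also split dots inside literals or entity names).
    query = re.sub(r"\s*([(){}.])\s*", r" \1 ", query)

    # 3. Rename the variables to randomly assigned placeholder names.
    variables = sorted(set(re.findall(r"\?\w+", query)))
    new_names = rng.sample([f"?var{i}" for i in range(len(variables))], len(variables))
    for old, new in zip(variables, new_names):
        query = re.sub(re.escape(old) + r"\b", new, query)

    # 4. Shuffle the clauses inside the outermost { ... } block.
    body = re.search(r"\{(.*)\}", query, flags=re.DOTALL)
    if body:
        clauses = [c.strip() for c in body.group(1).split(" . ") if c.strip()]
        rng.shuffle(clauses)
        query = query[: body.start(1)] + " " + " . ".join(clauses) + " . " + query[body.end(1):]

    return re.sub(r"\s+", " ", query).strip()

print(simplify_query(
    "SELECT ?uri WHERE { <http://dbpedia.org/resource/Berlin> "
    "<http://dbpedia.org/ontology/country> ?uri . }"
))
# e.g. SELECT ?var0 WHERE { resource:Berlin ontology:country ?var0 . }
```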

#### New split "valid"

A validation set was randomly extracted from the test set; it represents 10% of the whole dataset.

### Languages

- English

## Dataset Structure

### Types of questions

Comparison of question types with related datasets:

|                          |                 | [SimpleQuestions](https://huggingface.co/datasets/OrangeInnov/simplequestions-sparqltotext) | [ParaQA](https://huggingface.co/datasets/OrangeInnov/paraqa-sparqltotext) | [LC-QuAD 2.0](https://huggingface.co/datasets/OrangeInnov/lcquad_2.0-sparqltotext) | [CSQA](https://huggingface.co/datasets/OrangeInnov/csqa-sparqltotext) | [WebNLQ-QA](https://huggingface.co/datasets/OrangeInnov/webnlg-qa) |
|--------------------------|-----------------|:---------------:|:------:|:-----------:|:----:|:---------:|
| **Number of triplets in query** | 1                 | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                 | 2                 |   | ✓ | ✓ | ✓ | ✓ |
|                                 | More              |   |   | ✓ | ✓ | ✓ |
| **Logical connector between triplets** | Conjunction | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                 | Disjunction       |   |   |   | ✓ | ✓ |
|                                 | Exclusion         |   |   |   | ✓ | ✓ |
| **Topology of the query graph** | Direct            | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                 | Sibling           |   | ✓ | ✓ | ✓ | ✓ |
|                                 | Chain             |   | ✓ | ✓ | ✓ | ✓ |
|                                 | Mixed             |   |   | ✓ |   | ✓ |
|                                 | Other             |   | ✓ | ✓ | ✓ | ✓ |
| **Variable typing in the query** | None             | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                 | Target variable   |   | ✓ | ✓ | ✓ | ✓ |
|                                 | Internal variable |   | ✓ | ✓ | ✓ | ✓ |
| **Comparison clauses**          | None              | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                 | String            |   |   | ✓ |   | ✓ |
|                                 | Number            |   |   | ✓ | ✓ | ✓ |
|                                 | Date              |   |   | ✓ |   | ✓ |
| **Superlative clauses**         | No                | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                 | Yes               |   |   |   | ✓ |   |
| **Answer type**                 | Entity (open)     | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                 | Entity (closed)   |   |   |   | ✓ | ✓ |
|                                 | Number            |   |   | ✓ | ✓ | ✓ |
|                                 | Boolean           |   | ✓ | ✓ | ✓ | ✓ |
| **Answer cardinality**          | 0 (unanswerable)  |   |   | ✓ |   | ✓ |
|                                 | 1                 | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                 | More              |   | ✓ | ✓ | ✓ | ✓ |
| **Number of target variables**  | 0 (⇒ ASK verb)    |   | ✓ | ✓ | ✓ | ✓ |
|                                 | 1                 | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                 | 2                 |   |   | ✓ |   | ✓ |
| **Dialogue context**            | Self-sufficient   | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                 | Coreference       |   |   |   | ✓ | ✓ |
|                                 | Ellipsis          |   |   |   | ✓ | ✓ |
| **Meaning**                     | Meaningful        | ✓ | ✓ | ✓ | ✓ | ✓ |
|                                 | Non-sense         |   |   |   |   | ✓ |


### Data splits

Text verbalizations are only available for a subset of the test set, referred to as the *challenge set*. The other samples only contain dialogues in the form of follow-up SPARQL queries.

|           | Train | Validation | Test  |
| --------- | ----- | ---------- | ----- |
| Questions | 3,500 | 500        | 1,000 |

Other per-query statistics:

| Statistic              | Value        |
| ---------------------- | ------------ |
| NL questions per query | 1            |
| Characters per query   | 103 (± 27)   |
| Tokens per question    | 10.3 (± 3.7) |
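
As a usage sketch (assuming the repository id from the links above and the standard Hugging Face `datasets` API), the corpus can be loaded and inspected as follows:

```python
from datasets import load_dataset

# Usage sketch: the repository id is taken from the dataset links above.
dataset = load_dataset("OrangeInnov/paraqa-sparqltotext")
print(dataset)  # DatasetDict with "train", "validation" and "test" splits

example = dataset["train"][0]
print(example["simplified_query"])   # normalized SPARQL query (model input)
print(example["question"])           # natural-language question (target text)
print(example["verbalized_answer"])  # one of the paraphrased answers
```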


## Additional information

### Related datasets

This corpus is part of a set of 5 datasets released for SPARQL-to-Text generation, namely:
  - Non-conversational datasets
    - [SimpleQuestions](https://huggingface.co/datasets/OrangeInnov/simplequestions-sparqltotext) (from https://github.com/askplatypus/wikidata-simplequestions)
    - [ParaQA](https://huggingface.co/datasets/OrangeInnov/paraqa-sparqltotext) (from https://github.com/barshana-banerjee/ParaQA)
    - [LC-QuAD 2.0](https://huggingface.co/datasets/OrangeInnov/lcquad_2.0-sparqltotext) (from http://lc-quad.sda.tech/)
  - Conversational datasets
    - [CSQA](https://huggingface.co/datasets/OrangeInnov/csqa-sparqltotext) (from https://amritasaha1812.github.io/CSQA/)
    - [WebNLQ-QA](https://huggingface.co/datasets/OrangeInnov/webnlg-qa) (derived from https://gitlab.com/shimorina/webnlg-dataset/-/tree/master/release_v3.0)

### Licensing information

* Content from the original dataset: CC BY 4.0
* New content: CC BY-SA 4.0


### Citation information


#### This version of the corpus (with normalized SPARQL queries)

```bibtex
@inproceedings{lecorve2022sparql2text,
  title={SPARQL-to-Text Question Generation for Knowledge-Based Conversational Applications},
  author={Lecorv\'e, Gw\'enol\'e and Veyret, Morgan and Brabant, Quentin and Rojas-Barahona, Lina M.},
  booktitle={Proceedings of the Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (AACL-IJCNLP)},
  year={2022}
}
```

#### Original version

```bibtex
@inproceedings{kacupaj2021paraqa,
  title={ParaQA: A Question Answering Dataset with Paraphrase Responses for Single-Turn Conversation},
  author={Kacupaj, Endri and Banerjee, Barshana and Singh, Kuldeep and Lehmann, Jens},
  booktitle={European Semantic Web Conference},
  pages={598--613},
  year={2021},
  organization={Springer}
}
```