taidng committed
Commit 8480be6 · Parent(s): 58e914e

update readme and processing script

Files changed (2):
  1. README.md +88 -0
  2. process_viquad.py +77 -0
README.md CHANGED
@@ -46,4 +46,92 @@ configs:
     path: data/validation-*
   - split: test
     path: data/test-*
+annotations_creators:
+- crowdsourced
+language_creators:
+- crowdsourced
+- found
+language:
+- vi
+license:
+-
+multilinguality:
+- monolingual
+paperswithcode_id: null
+pretty_name: "UIT-ViQuAD2.0: Vietnamese Question Answering Dataset"
+task_categories:
+- question-answering
+task_ids:
+- extractive-qa
 ---
+# Vietnamese Question Answering Dataset
+
+## Dataset Card for UIT-ViQuAD 2.0
+### Dataset Summary
+This is the Hugging Face version of the Vietnamese QA dataset created by [Nguyen et al. (2020)](https://aclanthology.org/2020.coling-main.233/), with version 2.0 released in the [VLSP 2021-ViMRC shared task](https://arxiv.org/abs/2203.11400).
+
+The original UIT-ViQuAD contains over 23,000 human-generated question-answer pairs based on 5,109 passages from 174 Vietnamese Wikipedia articles. UIT-ViQuAD2.0 adds over 12,000 unanswerable questions for the same passages.
+
+Processing: a few duplicated questions and answers have been removed (see `process_viquad.py`).
+
+Questions about the private test set or the dataset itself should be directed to the authors.
+
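+### Usage
+
+A minimal loading sketch (the splits and field names below follow this repo's processing script; assumes the `datasets` library is installed):
+
+```python
+from datasets import load_dataset
+
+# splits: train / validation / test
+ds = load_dataset("taidng/UIT-ViQuAD2.0")
+
+row = ds["train"][0]
+# Each row has: id, uit_id, title, context, question,
+# answers, is_impossible, plausible_answers.
+# Unanswerable questions carry is_impossible=True and the
+# placeholder answer {'answer_start': -1, 'text': ''}.
+print(row["question"], row["answers"])
+```
+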
+### Languages
+
+Vietnamese (`vi`)
+
+
+## Dataset Creation
+
+### Source Data
+
+Vietnamese Wikipedia
+
+### Annotations
+Human annotators
+
+### Citation Information
+Original dataset:
+
+```bibtex
+@inproceedings{nguyen-etal-2020-vietnamese,
+    title = "A {V}ietnamese Dataset for Evaluating Machine Reading Comprehension",
+    author = "Nguyen, Kiet and
+      Nguyen, Vu and
+      Nguyen, Anh and
+      Nguyen, Ngan",
+    editor = "Scott, Donia and
+      Bel, Nuria and
+      Zong, Chengqing",
+    booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
+    month = dec,
+    year = "2020",
+    address = "Barcelona, Spain (Online)",
+    publisher = "International Committee on Computational Linguistics",
+    url = "https://aclanthology.org/2020.coling-main.233",
+    doi = "10.18653/v1/2020.coling-main.233",
+    pages = "2595--2605",
+    abstract = "Over 97 million inhabitants speak Vietnamese as the native language in the world. However, there are few research studies on machine reading comprehension (MRC) in Vietnamese, the task of understanding a document or text, and answering questions related to it. Due to the lack of benchmark datasets for Vietnamese, we present the Vietnamese Question Answering Dataset (UIT-ViQuAD), a new dataset for the low-resource language as Vietnamese to evaluate MRC models. This dataset comprises over 23,000 human-generated question-answer pairs based on 5,109 passages of 174 Vietnamese articles from Wikipedia. In particular, we propose a new process of dataset creation for Vietnamese MRC. Our in-depth analyses illustrate that our dataset requires abilities beyond simple reasoning like word matching and demands complicate reasoning such as single-sentence and multiple-sentence inferences. Besides, we conduct experiments on state-of-the-art MRC methods in English and Chinese as the first experimental models on UIT-ViQuAD, which will be compared to further models. We also estimate human performances on the dataset and compare it to the experimental results of several powerful machine models. As a result, the substantial differences between humans and the best model performances on the dataset indicate that improvements can be explored on UIT-ViQuAD through future research. Our dataset is freely available to encourage the research community to overcome challenges in Vietnamese MRC.",
+}
+```
+
+Shared task where version 2.0 was published:
+```bibtex
+@article{Nguyen_2022,
+    title={VLSP 2021-ViMRC Challenge: Vietnamese Machine Reading Comprehension},
+    volume={38},
+    ISSN={2615-9260},
+    url={http://dx.doi.org/10.25073/2588-1086/vnucsce.340},
+    DOI={10.25073/2588-1086/vnucsce.340},
+    number={2},
+    journal={VNU Journal of Science: Computer Science and Communication Engineering},
+    publisher={Vietnam National University Journal of Science},
+    author={Nguyen, Kiet and Tran, Son Quoc and Nguyen, Luan Thanh and Huynh, Tin Van and Luu, Son Thanh and Nguyen, Ngan Luu-Thuy},
+    year={2022},
+    month=dec
+}
+```
+
+### Acknowledgements
+
+We thank the authors of ViQuAD for releasing this dataset to the community.
process_viquad.py ADDED
@@ -0,0 +1,77 @@
+"""
+Script used to process UIT-ViQuAD 2.0.
+Source: https://github.com/tuanbc88/ai_question_answering/tree/master/machine_reading_comprehension/02_datasets
+"""
+import os
+import json
+import pandas as pd
+from itertools import groupby
+from datasets import Dataset, DatasetDict
+
+def deduplicate_answers(answers):
+    # groupby only merges adjacent items, so sort by the same key first
+    answers_sorted = sorted(answers, key=lambda x: (x['text'], x['answer_start']))
+    unique_answers = [next(group) for _, group in groupby(answers_sorted, key=lambda x: (x['text'], x['answer_start']))]
+    return unique_answers
+
+data_dir = "UIT-ViQuAD 2.0"
+dataset_dict = {}
+
+for split in ["train", "dev", "test"]:
+    fname = os.path.join(data_dir, f"{split}.json")
+    with open(fname, encoding="utf-8") as f:
+        data = json.load(f)
+    rows = []
+    title_i = 0
+
+    for title_data in data["data"]:
+        title = title_data["title"]
+        ctx_i = 0
+        title_i += 1
+
+        for ctx_and_qs in title_data["paragraphs"]:
+            questions = ctx_and_qs["qas"]
+            context = ctx_and_qs["context"]
+            q_i = 0
+            ctx_i += 1
+            question_set = set()
+            # default wherever the answer list is missing (e.g. unanswerable questions)
+            answer_default: list = [{'answer_start': -1, 'text': ''}]
+            for q in questions:
+                question = q["question"]
+                answers = q["answers"] if "answers" in q else answer_default
+                plausible_answers = q["plausible_answers"] if "plausible_answers" in q else answer_default
+                # Dedup answers
+                answers = deduplicate_answers(answers)
+                plausible_answers = deduplicate_answers(plausible_answers)
+                uit_id = q["id"]
+                is_impossible = q["is_impossible"] if "is_impossible" in q else False
+
+                # Skip duplicate questions within the same context
+                if question in question_set:
+                    print("---Found duplicate question: ", question, "---")
+                    print("Answer: ", answers)
+                    print("Answer plaus: ", plausible_answers)
+                    print("Impossible: ", is_impossible)
+                    continue
+
+                q_i += 1
+                overall_id = f"{title_i:04d}-{ctx_i:04d}-{q_i:04d}"
+                rows.append({
+                    "id": overall_id,
+                    "uit_id": uit_id,
+                    "title": title,
+                    "context": context,
+                    "question": question,
+                    "answers": answers,
+                    "is_impossible": is_impossible,
+                    "plausible_answers": plausible_answers
+                })
+                question_set.add(question)
+    # Convert to Dataset; HF uses "validation" where ViQuAD uses "dev"
+    df = pd.DataFrame(rows)
+    dataset_dict[split if split != "dev" else "validation"] = Dataset.from_pandas(df)
+
+print(dataset_dict)
+hf_dataset = DatasetDict(dataset_dict)
+hf_name = "UIT-ViQuAD2.0"
+hf_dataset.push_to_hub(f"taidng/{hf_name}")
+print("Dataset uploaded successfully!")