## Overview
The original dataset is available [here](https://github.com/decompositional-semantics-initiative/DNC).

This dataset was proposed in [Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation](https://www.aclweb.org/anthology/D18-1007/).

## Dataset curation
This version of the dataset does not include the "KG" `type-of-inference`, since its label set is
`[1, 2, 3, 4, 5]` while here we focus on NLI-related label sets, i.e. `[entailed, not-entailed]`.
For this reason, I named the dataset DNLI for _Diverse_ NLI, as in [Liu et al. (2020)](https://aclanthology.org/2020.conll-1.48/), instead of DNC.
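
As a quick sanity check, no split should contain "KG" instances. A minimal sketch, assuming the dataset has been pushed to the Hub under the repo id used in the script below:

```python
from datasets import load_dataset

dnli = load_dataset("pietrolesci/dnli")

# "KG" was filtered out, so it should not appear in any split
for split in dnli:
    assert "KG" not in dnli[split].unique("type-of-inference")
```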

This version of the dataset contains columns from both the `*_data.json` and the `*_metadata.json` files available in the original repo.
In the original repo, each data file has the following keys and values:

- `context`: The context sentence for the NLI pair. The context is already tokenized.
- `hypothesis`: The hypothesis sentence for the NLI pair. The hypothesis is already tokenized.
- `label`: The label for the NLI pair.
- `label-set`: The set of possible labels for the specific NLI pair.
- `binary-label`: A `True` or `False` label. See the paper for details on how the `label` is converted into a binary label.
- `split`: This can be `train`, `dev`, or `test`.
- `type-of-inference`: A string indicating what type of inference is tested in this example.
- `pair-id`: A unique integer id for the NLI pair. The `pair-id` is used to find the corresponding metadata for any given NLI pair.

while each metadata file has the following keys:

- `pair-id`: A unique integer id for the NLI pair.
- `corpus`: The original corpus where this example came from.
- `corpus-sent-id`: The id of the sentence (or example) in the original dataset that was recast.
- `corpus-license`: The license for the data from the original dataset.
- `creation-approach`: The method used to recast this example. Options are `automatic`, `manual`, or `human-labeled`.
- `misc`: A dictionary of other relevant information. This is an optional field.

The files are merged on the `pair-id` key. I **do not** include the `misc` column as it is not essential for NLI.
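
Once the dataset is on the Hub (see the script below), the merged schema can be inspected directly; a minimal sketch:

```python
from datasets import load_dataset

dnli = load_dataset("pietrolesci/dnli")
print(dnli)                    # splits and number of rows
print(dnli["train"].features)  # merged data + metadata columns
print(dnli["train"][0])        # one NLI pair with its metadata
```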

NOTE: the label mapping is **not** the customary 3-class one used for NLI tasks. The original authors used a binary target, which I encoded
with the following mapping: `{"not-entailed": 0, "entailed": 1}`.
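
Since `label` is stored as a `ClassLabel` feature (see the script below), the integer labels can be mapped back to strings:

```python
from datasets import load_dataset

dnli = load_dataset("pietrolesci/dnli", split="train")

label_feature = dnli.features["label"]
print(label_feature.int2str(0))  # not-entailed
print(label_feature.int2str(1))  # entailed
```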

NOTE: some instances are present in multiple splits (duplicates found by exact matching on `context`, `hypothesis`, and `label`); see the overlap check at the end of the script below.
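
If this overlap is a concern for your use case, one possible way to remove from the training split any instance that also appears in dev or test is sketched below (this is not part of the original processing):

```python
import pandas as pd
from datasets import Dataset, load_dataset

dnli = load_dataset("pietrolesci/dnli")
keys = ["context", "hypothesis", "label"]

train_df = dnli["train"].to_pandas()
eval_df = pd.concat([dnli["dev"].to_pandas(), dnli["test"].to_pandas()])

# keep only training rows whose (context, hypothesis, label) triple
# does not occur in dev or test
mask = ~train_df.set_index(keys).index.isin(eval_df.set_index(keys).index)
clean_train = Dataset.from_pandas(
    train_df[mask], features=dnli["train"].features, preserve_index=False
)
```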

## Code to create the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features, DatasetDict, Sequence
from pathlib import Path


paths = {
    "train": "<path_to_folder>/DNC-master/train",
    "dev": "<path_to_folder>/DNC-master/dev",
    "test": "<path_to_folder>/DNC-master/test",
}

# read all data files
dfs = []
for split, path in paths.items():
    for f_name in Path(path).rglob("*_data.json"):
        df = pd.read_json(str(f_name))
        df["file_split_data"] = split
        dfs.append(df)
data = pd.concat(dfs, ignore_index=False, axis=0)

# read all metadata files
meta_dfs = []
for split, path in paths.items():
    for f_name in Path(path).rglob("*_metadata.json"):
        df = pd.read_json(str(f_name))
        meta_dfs.append(df)
metadata = pd.concat(meta_dfs, ignore_index=False, axis=0)

# merge data and metadata on the unique pair id
dataset = pd.merge(data, metadata, on="pair-id", how="left")

# check that the `split` column matches the folder each file came from
assert sum(dataset["split"] != dataset["file_split_data"]) == 0
dataset = dataset.drop(columns=["file_split_data"])

# fix `binary-label` column: True for NLI labels, False otherwise (i.e., the numeric "KG" labels)
dataset.loc[~dataset["label"].isin(["entailed", "not-entailed"]), "binary-label"] = False
dataset.loc[dataset["label"].isin(["entailed", "not-entailed"]), "binary-label"] = True

# fix datatype
dataset["corpus-sent-id"] = dataset["corpus-sent-id"].astype(str)

# order columns as shown in the README.md
columns = [
    "context",
    "hypothesis",
    "label",
    "label-set",
    "binary-label",
    "split",
    "type-of-inference",
    "pair-id",
    "corpus",
    "corpus-sent-id",
    "corpus-license",
    "creation-approach",
    "misc",
]
dataset = dataset.loc[:, columns]

# remove `misc` column
dataset = dataset.drop(columns=["misc"])

# remove "KG" instances, as we focus on NLI
dataset.loc[dataset["label"].isin([1, 2, 3, 4, 5]), "type-of-inference"].value_counts()
# > "KG" is the only type-of-inference with label-set [1, 2, 3, 4, 5], so remove it
dataset = dataset.loc[~(dataset["type-of-inference"] == "KG")]

# encode labels
dataset["label"] = dataset["label"].map({"not-entailed": 0, "entailed": 1})

# fill NAs in `label-set` by propagating the last valid value
dataset["label-set"] = dataset["label-set"].ffill()

features = Features(
    {
        "context": Value(dtype="string"),
        "hypothesis": Value(dtype="string"),
        "label": ClassLabel(num_classes=2, names=["not-entailed", "entailed"]),
        "label-set": Sequence(length=2, feature=Value(dtype="string")),
        "binary-label": Value(dtype="bool"),
        "split": Value(dtype="string"),
        "type-of-inference": Value(dtype="string"),
        "pair-id": Value(dtype="int64"),
        "corpus": Value(dtype="string"),
        "corpus-sent-id": Value(dtype="string"),
        "corpus-license": Value(dtype="string"),
        "creation-approach": Value(dtype="string"),
    }
)

dataset_splits = {}
for split in ("train", "dev", "test"):
    df_split = dataset.loc[dataset["split"] == split]
    dataset_splits[split] = Dataset.from_pandas(df_split, features=features)

dataset_splits = DatasetDict(dataset_splits)
dataset_splits.push_to_hub("pietrolesci/dnli", token="<your token>")

# check overlap between splits
from itertools import combinations

for i, j in combinations(dataset_splits.keys(), 2):
    print(
        f"{i} - {j}: ",
        pd.merge(
            dataset_splits[i].to_pandas(),
            dataset_splits[j].to_pandas(),
            on=["context", "hypothesis", "label"],
            how="inner",
        ).shape[0],
    )
# > train - dev: 127
# > train - test: 55
# > dev - test: 54
```