espejelomar committed · Commit abb46b8 · 1 Parent(s): b5281fa

Update README.md

README.md CHANGED
@@ -46,145 +46,28 @@ Git repository for software associated with the 2016 ACL paper "Identifying Causal Relations Using Parallel Wikipedia Articles"
Disclaimer: The team releasing altlex did not upload the dataset to the Hub and did not write a dataset card.
These steps were done by the Hugging Face team.

-### Supported Tasks

-[

### Languages

-
-
-## Dataset Structure
-
-Parallel Wikipedia Format
-
-This is a gzipped, JSON-formatted file. The "titles" array is the shared title name of the English and Simple Wikipedia articles.
-The "articles" array consists of two arrays, each of which must be the same length as the "titles" array;
-the indices into these arrays must point to the aligned articles and titles.
-Each article within the articles array is an array of tokenized sentence strings (sentence-tokenized but not word-tokenized).
-
-The format of the dictionary is as follows:
-
-```
-{"files": [english_name, simple_name],
- "articles": [
-  [[article_1_sentence_1_string, article_1_sentence_2_string, ...],
-   [article_2_sentence_1_string, article_2_sentence_2_string, ...],
-   ...
-  ],
-  [[article_1_sentence_1_string, article_1_sentence_2_string, ...],
-   [article_2_sentence_1_string, article_2_sentence_2_string, ...],
-   ...
-  ]
- ],
- "titles": [title_1_string, title_2_string, ...]
-}
-
-```
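For illustration, a minimal sketch of reading the parallel file described above (not part of the original README; the file name is a placeholder):

```python
import gzip
import json

# Placeholder name for the gzipped, JSON-formatted parallel-Wikipedia file.
with gzip.open("parallel_wikipedia.json.gz", "rt", encoding="utf-8") as f:
    data = json.load(f)

english_articles, simple_articles = data["articles"]  # two aligned article lists
for i, title in enumerate(data["titles"]):
    # each article is a list of sentence strings (sentence- but not word-tokenized)
    print(title, len(english_articles[i]), len(simple_articles[i]))
```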
-
-Parsed Wikipedia Format
-
-This is a gzipped, JSON-formatted list of parsed Wikipedia article pairs.
-The list stored at 'sentences' is of length 2 and stores each version
-of the English and Simple Wikipedia article with the same title.
-
-The data is formatted as follows:
-
-```
-[
- {
-  "index": article_index,
-  "title": article_title_string,
-  "sentences": [[parsed_sentence_1, parsed_sentence_2, ...],
-                [parsed_sentence_1, parsed_sentence_2, ...]
-               ]
- },
- ...
-]
-
-```
-
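Likewise, a short sketch of iterating over the parsed article pairs (the file name is again a placeholder):

```python
import gzip
import json

# Placeholder name for the gzipped list of parsed Wikipedia article pairs.
with gzip.open("parsed_wikipedia.json.gz", "rt", encoding="utf-8") as f:
    article_pairs = json.load(f)

for pair in article_pairs:
    english_version, simple_version = pair["sentences"]  # always length 2
    print(pair["index"], pair["title"], len(english_version), len(simple_version))
```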
-Parsed Pairs Format

-
-even and odd indices. For the parsed sentence, see "Parsed Sentence Format."

-

```
-[
-
-
-
-...
-]
-
-```
-
-Parsed Sentence Format
-
-Each parsed sentence is of the following format:
-
-```
-{
- "dep": [[[governor_index, dependent_index, relation_string], ...], ...],
- "lemmas": [[lemma_1_string, lemma_2_string, ...], ...],
- "pos": [[pos_1_string, pos_2_string, ...], ...],
- "parse": [parenthesized_parse_1_string, ...],
- "words": [[word_1_string, word_2_string, ...], ...],
- "ner": [[ner_1_string, ner_2_string, ...], ...]
-}
-
-```
-
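A short sketch of walking one such parsed sentence dictionary (the sample values below are invented for illustration):

```python
# One parsed sentence; each field holds one sub-list per sentence.
parsed = {
    "words": [["This", "caused", "flooding", "."]],
    "pos": [["DT", "VBD", "NN", "."]],
    "ner": [["O", "O", "O", "O"]],
    "lemmas": [["this", "cause", "flooding", "."]],
}

for words, pos, ner in zip(parsed["words"], parsed["pos"], parsed["ner"]):
    for token, tag, entity in zip(words, pos, ner):
        print(f"{token}\t{tag}\t{entity}")
```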
-Feature Extractor Config Format
-
-```
-{"framenetSettings":
-  {"binary": true/false},
- "featureSettings":
-  {
-   "arguments_cat_curr": true/false,
-   "arguments_verbnet_prev": true/false,
-   "head_word_cat_curr": true/false,
-   "head_word_verbnet_prev": true/false,
-   "head_word_verbnet_altlex": true/false,
-   "head_word_cat_prev": true/false,
-   "head_word_cat_altlex": true/false,
-   "kld_score": true/false,
-   "head_word_verbnet_curr": true/false,
-   "arguments_verbnet_curr": true/false,
-   "framenet": true/false,
-   "arguments_cat_prev": true/false,
-   "connective": true/false
-  },
- "kldSettings":
-  {"kldDir": $kld_name}
-}
-
```

-

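For concreteness, a sketch that writes one such config file (every flag value here is arbitrary, and "my_kld_dir" is a placeholder for $kld_name):

```python
import json

# Arbitrary example values for the config format above.
config = {
    "framenetSettings": {"binary": True},
    "featureSettings": {
        "arguments_cat_curr": True,
        "arguments_verbnet_prev": True,
        "head_word_cat_curr": True,
        "head_word_verbnet_prev": False,
        "head_word_verbnet_altlex": True,
        "head_word_cat_prev": False,
        "head_word_cat_altlex": True,
        "kld_score": True,
        "head_word_verbnet_curr": False,
        "arguments_verbnet_curr": True,
        "framenet": True,
        "arguments_cat_prev": False,
        "connective": True,
    },
    "kldSettings": {"kldDir": "my_kld_dir"},  # placeholder for $kld_name
}

with open("feature_config.json", "w") as f:
    json.dump(config, f, indent=2)
```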
-It is also possible to run the feature extractor directly on a single data point.
-From the featureExtraction module, create a FeatureExtractor object and call the method addFeatures
-on a DataPoint object (note that this does not create any interaction features;
-for that you will also need to call makeInteractionFeatures).
-The DataPoint class takes a dictionary as input, in the following format:
-
-```
-{
- "sentences": [{"ner": [...], "pos": [...], "words": [...], "stems": [...], "lemmas": [...], "dependencies": [...]}, {...}],
- "altlexLength": integer,
- "altlex": {"dependencies": [...]}
-}
-```
-
-The sentences list is the pair of sentences/spans where the first span begins with the altlex. Dependencies must be a list where at index i there is either a dependency relation string and governor index integer, or a NoneType; index i into the words list is the dependent of this relation. To split single-sentence dependency relations, use the function splitDependencies in utils.dependencyUtils.
-
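A hedged sketch of that flow: only the names FeatureExtractor, addFeatures, makeInteractionFeatures, and DataPoint come from the README; the import path, call shapes, and sample values are assumptions.

```python
# Import path is an assumption; adjust to wherever the altlex repo's
# featureExtraction module sits on your PYTHONPATH.
from featureExtraction import DataPoint, FeatureExtractor

span = {
    "words": ["this", "caused", "flooding"],
    "lemmas": ["this", "cause", "flooding"],
    "stems": ["this", "caus", "flood"],
    "pos": ["DT", "VBD", "NN"],
    "ner": ["O", "O", "O"],
    # index i holds a relation string and governor index, or None
    "dependencies": [("nsubj", 1), None, ("dobj", 1)],
}

point = DataPoint({
    "sentences": [span, span],  # first span begins with the altlex
    "altlexLength": 2,          # the altlex here is "this caused"
    "altlex": {"dependencies": [("nsubj", 1), None]},
})

extractor = FeatureExtractor()
features = extractor.addFeatures(point)                  # no interaction features yet
interactions = extractor.makeInteractionFeatures(point)  # assumed call shape
```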
-### Curation Rationale
-
-[More Information Needed](https://github.com/chridey/altlex)

### Source Data

@@ -238,6 +121,6 @@ The sentences list is the pair of sentences/spans where the first span begins with the altlex.

### Contributions

-

---

Disclaimer: The team releasing altlex did not upload the dataset to the Hub and did not write a dataset card.
These steps were done by the Hugging Face team.

+### Supported Tasks

+- [Sentence Transformers](https://huggingface.co/sentence-transformers) training.

### Languages

+- English.

+## Dataset Structure: Equivalent sentence pairs

+Each example in the dataset contains a pair of equivalent sentences and is formatted as a dictionary:

```
+{"set": [sentence_1, sentence_2]}
+{"set": [sentence_1, sentence_2]}
+...
+{"set": [sentence_1, sentence_2]}
```

+This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences.
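As an illustration of that training setup (not part of the commit; the checkpoint name and local file path are placeholders), a minimal sentence-transformers sketch that treats each pair as a positive example:

```python
import gzip
import json

from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Placeholder path: a local copy of the pairs, one {"set": [s1, s2]} per line.
examples = []
with gzip.open("altlex_pairs.jsonl.gz", "rt", encoding="utf-8") as f:
    for line in f:
        examples.append(InputExample(texts=json.loads(line)["set"]))

model = SentenceTransformer("distilbert-base-uncased")  # any base checkpoint
loader = DataLoader(examples, shuffle=True, batch_size=32)
# With equivalent pairs, in-batch negatives make MultipleNegativesRankingLoss
# a natural choice: every other pair in the batch serves as a negative.
loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=1)
```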

### Source Data


### Contributions

+- [@chridey](https://github.com/chridey/altlex/commits?author=chridey) for adding this dataset to GitHub.

---