---
license: mit
language:
- en
paperswithcode_id: embedding-data/altlex
pretty_name: altlex
---
Dataset Card for "altlex"
Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: https://github.com/chridey/altlex
- Repository: https://github.com/chridey/altlex
- Paper: https://aclanthology.org/P16-1135.pdf
- Point of Contact: Christopher Hidey
Dataset Summary
Data and software associated with the 2016 ACL paper "Identifying Causal Relations Using Parallel Wikipedia Articles" (Hidey and McKeown, 2016). The linked Git repository contains the code, and the sections below describe the data formats it uses.
Supported Tasks and Leaderboards
Languages
English (en); the parallel articles are drawn from English Wikipedia and Simple English Wikipedia.
Dataset Structure
Parallel Wikipedia Format
This is a gzipped, JSON-formatted file. The "files" array records the names of the English and Simple English source files. The "titles" array holds the title shared by each aligned pair of English and Simple English Wikipedia articles. The "articles" array contains two arrays, one per Wikipedia version; each must be the same length as the "titles" array, and the same index across these arrays points to the aligned articles and their title. Each article within the "articles" array is an array of sentence strings (sentence-split but not word-tokenized).
The format of the dictionary is as follows:
{"files": [english_name, simple_name],
"articles": [
[[article_1_sentence_1_string, article_1_sentence_2_string, ...],
[article_2_sentence_1_string, article_2_sentence_2_string, ...],
...
],
[[article_1_sentence_1_string, article_1_sentence_2_string, ...],
[article_2_sentence_1_string, article_2_sentence_2_string, ...],
...
]
],
"titles": [title_1_string, title_2_string, ...]
}
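A minimal sketch of reading this file in Python, assuming it has been downloaded locally; the file name below is a placeholder, not the actual distribution name:

import gzip
import json

# Placeholder file name; substitute the parallel-Wikipedia file shipped with the repository.
with gzip.open("parallel_wikipedia.json.gz", "rt", encoding="utf-8") as f:
    data = json.load(f)

english_articles, simple_articles = data["articles"]
for title, english, simple in zip(data["titles"], english_articles, simple_articles):
    # Each article is a list of sentence strings, aligned with its title by index.
    print(title, len(english), len(simple))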
Parsed Wikipedia Format
This is a gzipped, JSON-formatted list of parsed Wikipedia article pairs. The list stored at "sentences" has length 2 and holds the English and Simple English Wikipedia versions of the article with the given title.
The data is formatted as follows:
[
  {
    "index": article_index,
    "title": article_title_string,
    "sentences": [[parsed_sentence_1, parsed_sentence_2, ...],
                  [parsed_sentence_1, parsed_sentence_2, ...]
                 ]
  },
  ...
]
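A minimal sketch of iterating over the parsed article pairs, again with a placeholder file name:

import gzip
import json

# Placeholder file name; substitute the parsed-Wikipedia file shipped with the repository.
with gzip.open("parsed_wikipedia.json.gz", "rt", encoding="utf-8") as f:
    articles = json.load(f)

for article in articles:
    # "sentences" holds two parallel lists of parsed sentences: English, then Simple English.
    english_parses, simple_parses = article["sentences"]
    print(article["index"], article["title"], len(english_parses), len(simple_parses))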
Parsed Pairs Format
This is a gzipped, JSON-formatted list of parsed sentences. Paraphrase pairs occupy consecutive even and odd indices, i.e. the sentence at index 2i is paired with the sentence at index 2i+1. For the format of each parsed sentence, see "Parsed Sentence Format" below.
The data is formatted as follows:
[
...,
parsed_sentence_2,
parsed_sentence_3,
...
]
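A minimal sketch of recovering the paraphrase pairs from this flat list, with a placeholder file name:

import gzip
import json

# Placeholder file name; substitute the parsed-pairs file shipped with the repository.
with gzip.open("parsed_pairs.json.gz", "rt", encoding="utf-8") as f:
    parsed = json.load(f)

# The sentence at index 2*i is paired with the sentence at index 2*i + 1.
pairs = [(parsed[i], parsed[i + 1]) for i in range(0, len(parsed) - 1, 2)]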
Parsed Sentence Format
Each parsed sentence is of the following format:
{
  "dep": [[[governor_index, dependent_index, relation_string], ...], ...],
  "lemmas": [[lemma_1_string, lemma_2_string, ...], ...],
  "pos": [[pos_1_string, pos_2_string, ...], ...],
  "parse": [parenthesized_parse_1_string, ...],
  "words": [[word_1_string, word_2_string, ...], ...],
  "ner": [[ner_1_string, ner_2_string, ...], ...]
}
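For illustration, two small helpers (not part of the repository) that walk one parsed sentence; they only assume the schema above, where every field holds one sub-list per sub-sentence:

# Illustrative helpers assuming the Parsed Sentence Format shown above.
def print_tagged_tokens(parsed_sentence):
    # Print each sub-sentence as word/POS tokens.
    for words, tags in zip(parsed_sentence["words"], parsed_sentence["pos"]):
        print(" ".join("{}/{}".format(w, t) for w, t in zip(words, tags)))

def dependency_edges(parsed_sentence):
    # Yield the (governor_index, dependent_index, relation_string) triples of each sub-sentence.
    for edges in parsed_sentence["dep"]:
        yield [(gov, dep, rel) for gov, dep, rel in edges]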
Feature Extractor Config Format
{"framenetSettings":
{"binary": true/false},
"featureSettings":
{
"arguments_cat_curr": true/false,
"arguments_verbnet_prev": true/false,
"head_word_cat_curr": true/false,
"head_word_verbnet_prev": true/false,
"head_word_verbnet_altlex": true/false,
"head_word_cat_prev": true/false,
"head_word_cat_altlex": true/false,
"kld_score": true/false,
"head_word_verbnet_curr": true/false,
"arguments_verbnet_curr": true/false,
"framenet": true/false,
"arguments_cat_prev": true/false,
"connective": true/false
},
"kldSettings":
{"kldDir": $kld_name}
}
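A minimal sketch of writing a concrete config file matching this schema; every boolean choice and the KLD directory name are illustrative assumptions, not recommended settings:

import json

# All feature switches and the kldDir value below are illustrative assumptions.
config = {
    "framenetSettings": {"binary": True},
    "featureSettings": {
        "arguments_cat_curr": True,
        "arguments_verbnet_prev": True,
        "head_word_cat_curr": True,
        "head_word_verbnet_prev": False,
        "head_word_verbnet_altlex": True,
        "head_word_cat_prev": False,
        "head_word_cat_altlex": True,
        "kld_score": True,
        "head_word_verbnet_curr": False,
        "arguments_verbnet_curr": True,
        "framenet": True,
        "arguments_cat_prev": False,
        "connective": True
    },
    "kldSettings": {"kldDir": "kld_model"}  # hypothetical directory name
}

with open("feature_config.json", "w", encoding="utf-8") as f:
    json.dump(config, f, indent=2)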
Data Point Format
It is also possible to run the feature extractor directly on a single data point. From the featureExtraction module, create a FeatureExtractor object and call its addFeatures method on a DataPoint object (note that this does not create any interaction features; for those you will also need to call makeInteractionFeatures). The DataPoint class takes a dictionary as input, in the following format:
{
  "sentences": [{"ner": [...], "pos": [...], "words": [...], "stems": [...], "lemmas": [...], "dependencies": [...]},
                {...}],
  "altlexLength": integer,
  "altlex": {"dependencies": [...]}
}
The "sentences" list is the pair of sentences/spans, where the first span begins with the altlex. "dependencies" must be a list in which the entry at index i is either a dependency relation string together with a governor index integer, or None; the word at index i in the "words" list is the dependent of that relation. To split the dependency relations of a single sentence, use the function splitDependencies in utils.dependencyUtils.
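A minimal sketch of this workflow; the import paths, constructor arguments, and the module defining DataPoint are assumptions based on the description above rather than a verified API, and the feature values are left empty for brevity:

# Import paths and constructor arguments are assumptions; adjust to the repository layout.
from featureExtraction import FeatureExtractor
from dataPoint import DataPoint  # hypothetical module path for the DataPoint class

data_point_dict = {
    "sentences": [
        {"ner": [], "pos": [], "words": [], "stems": [], "lemmas": [], "dependencies": []},
        {"ner": [], "pos": [], "words": [], "stems": [], "lemmas": [], "dependencies": []}
    ],
    "altlexLength": 2,  # illustrative value
    "altlex": {"dependencies": []}
}

fe = FeatureExtractor()            # constructor arguments assumed
dp = DataPoint(data_point_dict)
features = fe.addFeatures(dp)
# addFeatures does not create interaction features; call makeInteractionFeatures for those.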