---
license: mit
language:
- en
paperswithcode_id: altlex
pretty_name: altlex
---
|
|
|
# Dataset Card for "altlex" |
|
|
|
## Table of Contents |
|
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
|
|
|
## Dataset Description |
|
|
|
**Homepage:** [https://github.com/chridey/altlex](https://github.com/chridey/altlex) |
|
|
|
**Repository:** [https://github.com/chridey/altlex](https://github.com/chridey/altlex)
|
|
|
**Paper:** [https://aclanthology.org/P16-1135.pdf](https://aclanthology.org/P16-1135.pdf) |
|
|
|
**Point of Contact:** [Christopher Hidey]([email protected]) |
|
|
|
### Dataset Summary |
|
|
|
Data and software associated with the 2016 ACL paper "Identifying Causal Relations Using Parallel Wikipedia Articles."
|
|
|
Disclaimer: The team releasing altlex did not upload the dataset to the Hub and did not write a dataset card. |
|
These steps were done by the Hugging Face team. |
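
The dataset can be loaded with the 🤗 Datasets library (a minimal sketch, assuming the Hub repository id `embedding-data/altlex`):

```
from datasets import load_dataset

# Load the altlex dataset from the Hugging Face Hub.
dataset = load_dataset("embedding-data/altlex")
print(dataset)
```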
|
|
|
### Supported Tasks and Leaderboards |
|
|
|
[More Information Needed](https://github.com/chridey/altlex) |
|
|
|
### Languages |
|
|
|
[More Information Needed](https://github.com/chridey/altlex) |
|
|
|
## Dataset Structure |
|
|
|
#### Parallel Wikipedia Format
|
|
|
This is a gzipped, JSON-formatted file. The "files" array names the two source files (English and Simple Wikipedia). The "titles" array holds the title shared by each pair of aligned English and Simple Wikipedia articles. The "articles" array consists of two arrays, one per Wikipedia version; each must be the same length as the "titles" array, and index i in either array points to the article aligned with title i. Each article is an array of sentence strings (the text is sentence-tokenized but not word-tokenized).
|
|
|
The format of the dictionary is as follows: |
|
|
|
``` |
|
{"files": [english_name, simple_name], |
|
"articles": [ |
|
[[article_1_sentence_1_string, article_1_sentence_2_string, ...], |
|
[article_2_sentence_1_string, article_2_sentence_2_string, ...], |
|
... |
|
], |
|
[[article_1_sentence_1_string, article_1_sentence_2_string, ...], |
|
[article_2_sentence_1_string, article_2_sentence_2_string, ...], |
|
... |
|
] |
|
], |
|
"titles": [title_1_string, title_2_string, ...] |
|
} |
|
|
|
``` |
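
For example, the file can be read with the Python standard library (a minimal sketch; the file name here is hypothetical):

```
import gzip
import json

# Load the parallel Wikipedia file described above.
with gzip.open("parallel_wikipedia.json.gz", "rt", encoding="utf-8") as f:
    data = json.load(f)

english_articles, simple_articles = data["articles"]
for title, english, simple in zip(data["titles"], english_articles, simple_articles):
    # Each article is a list of sentence strings.
    print(title, len(english), len(simple))
```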
|
|
|
#### Parsed Wikipedia Format
|
|
|
This is a gzipped, JSON-formatted list of parsed Wikipedia article pairs.
The list stored at "sentences" has length 2 and holds the English and Simple Wikipedia versions of the article with the same title.
|
|
|
The data is formatted as follows: |
|
|
|
``` |
|
[ |
|
{ |
|
"index": article_index, |
|
"title": article_title_string, |
|
"sentences": [[parsed_sentence_1, parsed_sentence_2, ...], |
|
[parsed_sentence_1, parsed_sentence_2, ...] |
|
] |
|
}, |
|
... |
|
] |
|
|
|
``` |
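
As above, the file can be read with gzip and json (a sketch; the file name is hypothetical):

```
import gzip
import json

# Load the parsed article pairs.
with gzip.open("parsed_wikipedia.json.gz", "rt", encoding="utf-8") as f:
    articles = json.load(f)

for article in articles:
    # "sentences" holds the English and Simple Wikipedia versions in turn.
    english_parsed, simple_parsed = article["sentences"]
    print(article["index"], article["title"], len(english_parsed), len(simple_parsed))
```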
|
|
|
#### Parsed Pairs Format
|
|
|
This is a gzipped, JSON-formatted list of parsed sentences. Paraphrase pairs occupy consecutive
even and odd indices, so the sentence at index 2k pairs with the one at index 2k+1, as the sketch below illustrates. For the format of each parsed sentence, see "Parsed Sentence Format."
|
|
|
The data is formatted as follows: |
|
|
|
``` |
|
[ |
|
..., |
|
parsed_sentence_2, |
|
parsed_sentence_3, |
|
... |
|
] |
|
|
|
``` |
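
A minimal pairing sketch (the file name is hypothetical):

```
import gzip
import json

with gzip.open("parsed_pairs.json.gz", "rt", encoding="utf-8") as f:
    sentences = json.load(f)

# The sentence at index 2k pairs with the one at index 2k+1.
pairs = list(zip(sentences[0::2], sentences[1::2]))
```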
|
|
|
#### Parsed Sentence Format
|
|
|
Each parsed sentence is of the following format: |
|
|
|
``` |
|
{ |
|
"dep": [[[governor_index, dependent_index, relation_string], ...], ...], |
|
"lemmas": [[lemma_1_string, lemma_2_string, ...], ...], |
|
"pos": [[pos_1_string, pos_2_string, ...], ...], |
|
"parse": [parenthesized_parse_1_string, ...], |
|
"words": [[word_1_string, word_2_string, ...], ...] , |
|
"ner": [[ner_1_string, ner_2_string, ...], ...] |
|
} |
|
|
|
``` |
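
For instance, readable dependency triples can be reconstructed from the parallel "dep" and "words" lists (a sketch; the exact indexing convention, such as how the root is marked, should be checked against the data):

```
def dependency_triples(parsed, clause=0):
    """Turn the [governor_index, dependent_index, relation] entries of one
    clause into (governor_word, relation, dependent_word) tuples."""
    words = parsed["words"][clause]
    return [
        (words[gov], rel, words[dep])
        for gov, dep, rel in parsed["dep"][clause]
    ]
```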
|
|
|
#### Feature Extractor Config Format
|
|
|
``` |
|
{"framenetSettings": |
|
{"binary": true/false}, |
|
"featureSettings": |
|
{ |
|
"arguments_cat_curr": true/false, |
|
"arguments_verbnet_prev": true/false, |
|
"head_word_cat_curr": true/false, |
|
"head_word_verbnet_prev": true/false, |
|
"head_word_verbnet_altlex": true/false, |
|
"head_word_cat_prev": true/false, |
|
"head_word_cat_altlex": true/false, |
|
"kld_score": true/false, |
|
"head_word_verbnet_curr": true/false, |
|
"arguments_verbnet_curr": true/false, |
|
"framenet": true/false, |
|
"arguments_cat_prev": true/false, |
|
"connective": true/false |
|
}, |
|
"kldSettings": |
|
{"kldDir": $kld_name} |
|
} |
|
|
|
``` |
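
Such a config can be written out from Python (a sketch; the file name, directory name, and flag choices are illustrative, not recommendations):

```
import json

# An illustrative configuration: enable only the connective and KLD features.
config = {
    "framenetSettings": {"binary": True},
    "featureSettings": {
        "arguments_cat_curr": False,
        "arguments_verbnet_prev": False,
        "head_word_cat_curr": False,
        "head_word_verbnet_prev": False,
        "head_word_verbnet_altlex": False,
        "head_word_cat_prev": False,
        "head_word_cat_altlex": False,
        "kld_score": True,
        "head_word_verbnet_curr": False,
        "arguments_verbnet_curr": False,
        "framenet": False,
        "arguments_cat_prev": False,
        "connective": True,
    },
    "kldSettings": {"kldDir": "kld_model"},  # directory name is a placeholder
}

with open("featureConfig.json", "w") as f:
    json.dump(config, f, indent=2)
```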
|
|
|
#### Data Point Format
|
|
|
It is also possible to run the feature extractor directly on a single data point.
From the featureExtraction module, create a FeatureExtractor object and call its addFeatures method
on a DataPoint object (note that this does not create any interaction features;
for those you will also need to call makeInteractionFeatures).
The DataPoint class takes a dictionary as input, in the following format:
|
|
|
```
{
 "sentences": [{"ner": [...], "pos": [...], "words": [...], "stems": [...], "lemmas": [...], "dependencies": [...]},
               {...}],
 "altlexLength": integer,
 "altlex": {"dependencies": [...]}
}
```

The "sentences" list is the pair of sentences/spans, where the first span begins with the altlex. "dependencies" must be a list in which index i holds either a dependency relation string and governor index integer, or None; the word at index i of the "words" list is the dependent of that relation. To split single-sentence dependency relations, use the function splitDependencies in utils.dependencyUtils.
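
Putting this together, single-datum feature extraction might look like the following sketch. Only the names FeatureExtractor, addFeatures, makeInteractionFeatures, DataPoint, and utils.dependencyUtils.splitDependencies come from the description above; the import paths and return conventions are assumptions, so check the repository for the exact interfaces.

```
from featureExtraction import FeatureExtractor, DataPoint  # import paths are assumptions

# A data point in the format described above; the span parses are elided here.
datum = {
    "sentences": [...],                 # pair of parsed spans, first begins with the altlex
    "altlexLength": 2,
    "altlex": {"dependencies": [...]},
}

extractor = FeatureExtractor()
dp = DataPoint(datum)

features = extractor.addFeatures(dp)                    # does not include interaction features
features.update(extractor.makeInteractionFeatures(dp))  # add those separately
```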
|
|
|
## Dataset Creation

### Curation Rationale
|
|
|
[More Information Needed](https://github.com/chridey/altlex) |
|
|
|
### Source Data |
|
|
|
#### Initial Data Collection and Normalization |
|
|
|
[More Information Needed](https://github.com/chridey/altlex) |
|
|
|
#### Who are the source language producers? |
|
|
|
[More Information Needed](https://github.com/chridey/altlex) |
|
|
|
### Annotations |
|
|
|
#### Annotation process |
|
|
|
[More Information Needed](https://github.com/chridey/altlex) |
|
|
|
#### Who are the annotators? |
|
|
|
[More Information Needed](https://github.com/chridey/altlex) |
|
|
|
### Personal and Sensitive Information |
|
|
|
[More Information Needed](https://github.com/chridey/altlex) |
|
|
|
## Considerations for Using the Data |
|
|
|
### Social Impact of Dataset |
|
|
|
[More Information Needed](https://github.com/chridey/altlex) |
|
|
|
### Discussion of Biases |
|
|
|
[More Information Needed](https://github.com/chridey/altlex) |
|
|
|
### Other Known Limitations |
|
|
|
[More Information Needed](https://github.com/chridey/altlex) |
|
|
|
## Additional Information |
|
|
|
### Dataset Curators |
|
|
|
[More Information Needed](https://github.com/chridey/altlex) |
|
|
|
### Licensing Information |
|
|
|
[More Information Needed](https://github.com/chridey/altlex) |
|
|
|
### Citation Information |
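
```
@inproceedings{hidey-mckeown-2016-identifying,
    title = "Identifying Causal Relations Using Parallel {W}ikipedia Articles",
    author = "Hidey, Christopher and McKeown, Kathy",
    booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    year = "2016",
    address = "Berlin, Germany",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P16-1135",
}
```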
|
|
|
### Contributions |
|
|
|
Thanks to [@chridey](https://github.com/chridey) for adding this dataset.
|
|
|
|
|