Update README.md
README.md CHANGED

@@ -120,7 +120,7 @@ The data fields are the same among all splits.
 
 
 - `id`: the instance id of this sentence, a `string` feature.
-- `token`: the list of tokens of this sentence,
+- `token`: the list of tokens of this sentence, a `list` of `string` features.
 - `relation`: the relation label of this instance, a `string` classification label.
 - `subj_start`: the 0-based index of the start token of the relation subject mention, an `int` feature.
 - `subj_end`: the 0-based index of the end token of the relation subject mention, exclusive, an `int` feature.
@@ -164,6 +164,8 @@ See the Stanford paper and the Tacred Revisited paper, plus their appendices.
 To ensure that models trained on TACRED are not biased towards predicting false positives on real-world text,
 all sampled sentences where no relation was found between the mention pairs were fully annotated to be negative examples. As a result, 79.5% of the examples
 are labeled as no_relation.
+
+Tokenization of the English data was done with Stanford CoreNLP by the authors of the original dataset. The translated versions were tokenized with language-specific Spacy models (Spacy 3.1) or Trankit when there was no Spacy model for a given language (Hungarian, Turkish, Arabic, Hindi).
 #### Who are the annotators?
 [More Information Needed]
 ### Personal and Sensitive Information
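As a minimal sketch of the field semantics described in the diff (the field names come from the README; the concrete sentence, id, and relation values below are hypothetical), the 0-based, end-exclusive `subj_start`/`subj_end` indices select the subject mention by plain list slicing:

```python
# Hypothetical instance shaped like the documented fields.
example = {
    "id": "example-0001",  # hypothetical id
    "token": ["Bill", "Gates", "founded", "Microsoft", "."],
    "relation": "org:founded_by",
    "subj_start": 0,  # 0-based index of the first subject token
    "subj_end": 2,    # exclusive end index, so the span is token[0:2]
}

# End-exclusive spans map directly onto Python slice semantics.
subject = example["token"][example["subj_start"]:example["subj_end"]]
print(" ".join(subject))  # Bill Gates
```

Because `subj_end` is exclusive, the span length is simply `subj_end - subj_start`, with no off-by-one adjustment.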