---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
languages:
- en
licenses:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Jigsaw Unintended Bias in Toxicity Classification
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-scoring
task_ids:
- text-scoring-other-toxicity-prediction
---
# Dataset Card for Jigsaw Unintended Bias in Toxicity Classification
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/leaderboard
- **Point of Contact:** N/A
### Dataset Summary
The Jigsaw Unintended Bias in Toxicity Classification dataset comes from the eponymous Kaggle competition.
Please see the original [data](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data)
description for more information.
### Supported Tasks and Leaderboards
The main task for this dataset is toxicity prediction, i.e. predicting the `target` score of a comment. Scores for several
toxicity subtypes are also available, so the dataset can be used for multi-attribute prediction.
See the original [leaderboard](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/leaderboard)
for reference.
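As a quick-start sketch, the dataset can be loaded with the `datasets` library. The dataset name `jigsaw_unintended_bias` and the manual-download requirement (pointing `data_dir` at the unzipped Kaggle competition files) are assumptions based on usual Hub conventions; the path below is a placeholder:
```
from datasets import load_dataset

# Sketch: load the dataset from manually downloaded Kaggle files.
# Replace the placeholder path with wherever the files were unzipped.
dataset = load_dataset("jigsaw_unintended_bias", data_dir="/path/to/kaggle/files")

print(dataset["train"][0]["comment_text"])
print(dataset["train"][0]["target"])
```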
### Languages
English
## Dataset Structure
### Data Instances
A data point consists of an id, a comment, the main toxicity target, scores for the other toxicity subtypes, as well as identity attributes.
For instance, here is the first train example.
```
{
    "article_id": 2006,
    "asian": NaN,
    "atheist": NaN,
    "bisexual": NaN,
    "black": NaN,
    "buddhist": NaN,
    "christian": NaN,
    "comment_text": "This is so cool. It's like, 'would you want your mother to read this??' Really great idea, well done!",
    "created_date": "2015-09-29 10:50:41.987077+00",
    "disagree": 0,
    "female": NaN,
    "funny": 0,
    "heterosexual": NaN,
    "hindu": NaN,
    "homosexual_gay_or_lesbian": NaN,
    "identity_annotator_count": 0,
    "identity_attack": 0.0,
    "insult": 0.0,
    "intellectual_or_learning_disability": NaN,
    "jewish": NaN,
    "latino": NaN,
    "likes": 0,
    "male": NaN,
    "muslim": NaN,
    "obscene": 0.0,
    "other_disability": NaN,
    "other_gender": NaN,
    "other_race_or_ethnicity": NaN,
    "other_religion": NaN,
    "other_sexual_orientation": NaN,
    "parent_id": NaN,
    "physical_disability": NaN,
    "psychiatric_or_mental_illness": NaN,
    "publication_id": 2,
    "rating": 0,
    "sad": 0,
    "severe_toxicity": 0.0,
    "sexual_explicit": 0.0,
    "target": 0.0,
    "threat": 0.0,
    "toxicity_annotator_count": 4,
    "transgender": NaN,
    "white": NaN,
    "wow": 0
}
```
### Data Fields
- `id`: id of the comment
- `target`: value between 0 (non-toxic) and 1 (toxic) classifying the comment
- `comment_text`: the text of the comment
- `severe_toxicity`: value between 0 (not severely toxic) and 1 (severely toxic) classifying the comment
- `obscene`: value between 0 (non-obscene) and 1 (obscene) classifying the comment
- `identity_attack`: value between 0 (no identity attack) and 1 (identity attack) classifying the comment
- `insult`: value between 0 (non-insult) and 1 (insult) classifying the comment
- `threat`: value between 0 (non-threat) and 1 (threat) classifying the comment
- For a subset of rows, columns indicating whether the comment mentions each of the following identities (these may contain NaNs; see the filtering sketch after this field list):
- `male`
- `female`
- `transgender`
- `other_gender`
- `heterosexual`
- `homosexual_gay_or_lesbian`
- `bisexual`
- `other_sexual_orientation`
- `christian`
- `jewish`
- `muslim`
- `hindu`
- `buddhist`
- `atheist`
- `other_religion`
- `black`
- `white`
- `asian`
- `latino`
- `other_race_or_ethnicity`
- `physical_disability`
- `intellectual_or_learning_disability`
- `psychiatric_or_mental_illness`
- `other_disability`
- Other metadata related to the source of the comment, such as its creation date, publication id, number of likes, and number of annotators:
- `created_date`
- `publication_id`
- `parent_id`
- `article_id`
- `rating`
- `funny`
- `wow`
- `sad`
- `likes`
- `disagree`
- `sexual_explicit`
- `identity_annotator_count`
- `toxicity_annotator_count`
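Only a subset of the training comments carries identity annotations. As a minimal sketch (assuming `dataset` was loaded as in the example above, and that `identity_annotator_count` is 0 exactly for rows without identity annotations, per the field descriptions), the identity-annotated rows can be selected like this:
```
# Keep only rows that received identity annotations; the identity columns
# listed above are NaN for rows where identity_annotator_count is 0
# (an assumption based on the field descriptions).
annotated = dataset["train"].filter(
    lambda example: example["identity_annotator_count"] > 0
)
print(len(annotated))
```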
### Data Splits
There are four splits:
- train: the training set as released during the competition. Contains toxicity labels for all rows and identity
information for a subset of rows.
- test: the test set as released during the competition. Contains neither labels nor identity information.
- test_private_expanded: the private leaderboard test set, including toxicity labels and identity subgroups. The competition target was a binarized version of the toxicity column, which can be reconstructed with a >= 0.5 threshold (see the sketch below).
- test_public_expanded: the public leaderboard test set, including toxicity labels and identity subgroups. The competition target was a binarized version of the toxicity column, which can be reconstructed with a >= 0.5 threshold.
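A minimal sketch of reconstructing the binary competition target, assuming the splits were loaded as in the example above and that the continuous score column is named `toxicity` in the expanded test splits (it is named `target` in the train split):
```
# Reconstruct the binarized competition label with the >= 0.5 threshold
# described above.
def binarize(example):
    example["binary_target"] = int(example["toxicity"] >= 0.5)
    return example

test_private = dataset["test_private_expanded"].map(binarize)
print(test_private[0]["binary_target"])
```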
## Dataset Creation
### Curation Rationale
The dataset was created to help in efforts to identify and curb instances of toxicity online.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This dataset is released under CC0, as is the underlying comment text.
### Citation Information
No citation is available for this dataset, though you may link to the [Kaggle](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification) competition.
### Contributions
Thanks to [@iwontbecreative](https://github.com/iwontbecreative) for adding this dataset.