dipteshkanojia committed
Commit cc93bfb · 1 Parent(s): 37e4746
Files changed (1): README.md (+122 −1)
README.md CHANGED
@@ -1,4 +1,23 @@
- <p align="center"><img src="https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered/blob/main/imgs/plod.png" alt="logo" width="50" height="84"/></p>

  # PLOD: An Abbreviation Detection Dataset

@@ -12,6 +31,108 @@ We provide two variants of our dataset - Filtered and Unfiltered. They are descr
  2. The Unfiltered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) and a [CONLL format is present here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).<br/>

  ### Installation

  We use the custom NER pipeline in the [spaCy transformers](https://spacy.io/universe/project/spacy-transformers) library to train our models. This library supports training via any pre-trained language models available at the :rocket: [HuggingFace repository](https://huggingface.co/).<br/>
 
+ annotations_creators:
+ - Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - cc-by-sa-4.0
+ multilinguality:
+ - monolingual
+ paperswithcode_id: acronym-identification
+ pretty_name: 'PLOD: An Abbreviation Detection Dataset'
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ task_categories:
+ - token-classification
+ task_ids:
+ - named-entity-recognition

  # PLOD: An Abbreviation Detection Dataset
 
 
  2. The Unfiltered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) and a [CONLL format is present here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).<br/>

+ # Dataset Card for PLOD-unfiltered
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [Needs More Information]
+ - **Repository:** https://github.com/surrey-nlp/PLOD-AbbreviationDetection
+ - **Paper:** XX
+ - **Leaderboard:** YY
+ - **Point of Contact:** [Diptesh Kanojia](mailto:[email protected])
+
+ ### Dataset Summary
+
+ The PLOD dataset is an English-language dataset of abbreviations and their long forms tagged in text. It was collected from PLOS journal articles, which index abbreviations and long forms in the text, and was created to support the natural language processing task of abbreviation detection in the scientific domain.
+
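The card does not show a loading snippet; the sketch below is one way to pull the dataset with the `datasets` library (requires network access to the Hub), plus a small pure-Python helper. The assumption that tag `0` marks tokens outside any abbreviation span is inferred from the example in the Data Instances section, not stated authoritatively here.

```python
def load_plod():
    """Download the unfiltered variant from the Hugging Face Hub.

    Requires the `datasets` package and network access; not called below.
    """
    from datasets import load_dataset
    return load_dataset("surrey-nlp/PLOD-unfiltered")


def count_tagged(example):
    """Count tokens in one example carrying a non-zero NER tag
    (assumption: 0 means the token is outside any abbreviation span)."""
    return sum(1 for tag in example["ner_tags"] if tag != 0)


# Usage sketch (not executed here):
#   plod = load_plod()
#   n = count_tagged(plod["train"][0])
```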
+ ### Supported Tasks and Leaderboards
+
+ This dataset primarily supports the abbreviation detection task. It has also been tested on a train+dev split provided by the Acronym Detection shared task organized as part of the Scientific Document Understanding (SDU) workshop at AAAI 2022.
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A typical data point comprises an `id`, the `tokens` present in the text, the `pos_tags` for the corresponding tokens obtained via spaCy, and the `ner_tags`, which mark tokens as part of an abbreviation (`AC`, for acronym), part of a long form (`LF`), or neither.
+
+ An example from the dataset:
+ {'id': '1',
+ 'tokens': ['Study', '-', 'specific', 'risk', 'ratios', '(', 'RRs', ')', 'and', 'mean', 'BW', 'differences', 'were', 'calculated', 'using', 'linear', 'and', 'log', '-', 'binomial', 'regression', 'models', 'controlling', 'for', 'confounding', 'using', 'inverse', 'probability', 'of', 'treatment', 'weights', '(', 'IPTW', ')', 'truncated', 'at', 'the', '1st', 'and', '99th', 'percentiles', '.'],
+ 'pos_tags': [8, 13, 0, 8, 8, 13, 12, 13, 5, 0, 12, 8, 3, 16, 16, 0, 5, 0, 13, 0, 8, 8, 16, 1, 8, 16, 0, 8, 1, 8, 8, 13, 12, 13, 16, 1, 6, 0, 5, 0, 8, 13],
+ 'ner_tags': [0, 0, 0, 3, 4, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
+ }
+
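A minimal sketch of recovering abbreviation and long-form spans from a `tokens`/`ner_tags` pair. The `ID2LABEL` mapping below is an assumption inferred from the example above (0 = outside, 1 = acronym, 3/4 = beginning/inside of a long form); check the dataset's `features` for the authoritative label list.

```python
# Hypothetical tag mapping inferred from the example above, not taken
# from the dataset card itself.
ID2LABEL = {0: "O", 1: "AC", 3: "B-LF", 4: "I-LF"}


def extract_spans(tokens, ner_tags):
    """Group tokens into ('AC', text) and ('LF', text) spans."""
    spans = []
    current = None  # tokens of the long form being built, if any
    for tok, tag in zip(tokens, ner_tags):
        label = ID2LABEL.get(tag, "O")
        if label == "AC":
            spans.append(("AC", tok))
            current = None
        elif label == "B-LF":
            current = [tok]
            spans.append(("LF", current))
        elif label == "I-LF" and current is not None:
            current.append(tok)
        else:
            current = None
    # Join multi-token long forms into plain strings.
    return [(kind, " ".join(t) if isinstance(t, list) else t)
            for kind, t in spans]
```

Applied to the fragment `['risk', 'ratios', '(', 'RRs', ')']` with tags `[3, 4, 0, 1, 0]`, this yields the long form "risk ratios" followed by the acronym "RRs".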
+ ### Data Fields
+
+ - `id`: the row identifier for the data point.
+ - `tokens`: the tokens contained in the text.
+ - `pos_tags`: the part-of-speech tag for each corresponding token, obtained via spaCy.
+ - `ner_tags`: the tags marking abbreviations and long forms.
+
+ ### Data Splits
+
+ |            | Train  | Valid | Test  |
+ | ---------- | ------ | ----- | ----- |
+ | Filtered   | 112652 | 24140 | 24140 |
+ | Unfiltered | 113860 | 24399 | 24399 |
+
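As a quick sanity check on the table above, the unfiltered split sizes work out to roughly a 70/15/15 train/valid/test split:

```python
# Split sizes for the unfiltered variant, copied from the table above.
splits = {"train": 113860, "valid": 24399, "test": 24399}

total = sum(splits.values())
fractions = {name: round(n / total, 2) for name, n in splits.items()}
print(total, fractions)  # 162658 {'train': 0.7, 'valid': 0.15, 'test': 0.15}
```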
+ ## Dataset Creation
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The data was extracted from PLOS journal articles online and then tokenized and normalized.
+
+ #### Who are the source language producers?
+
+ PLOS journals
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The dataset was initially created by Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, and Constantin Orasan.
+
+ ### Licensing Information
+
+ CC-BY-SA 4.0
+
+ ### Citation Information
+
+ [Needs More Information]
+
  ### Installation

  We use the custom NER pipeline in the [spaCy transformers](https://spacy.io/universe/project/spacy-transformers) library to train our models. This library supports training via any pre-trained language models available at the :rocket: [HuggingFace repository](https://huggingface.co/).<br/>
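Since the repository also provides a CONLL-format export, here is a minimal sketch of rendering one `tokens`/`ner_tags` pair as CoNLL-style `token<TAB>label` lines. The `id2label` mapping in the usage example is a hypothetical illustration, not the dataset's official label list.

```python
def to_conll(tokens, tags, id2label):
    """Render one sentence as CoNLL-style 'token TAB label' lines."""
    lines = [f"{tok}\t{id2label.get(tag, 'O')}" for tok, tag in zip(tokens, tags)]
    return "\n".join(lines) + "\n"


# Toy usage with a hypothetical mapping (check the dataset's features
# for the authoritative one):
id2label = {0: "O", 1: "AC", 3: "B-LF", 4: "I-LF"}
print(to_conll(["BW", "differences"], [1, 0], id2label))
```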