The performance of distilbert-NER is linked to its training on the CoNLL-2003 dataset. Therefore, it might show limited effectiveness on text data that significantly differs from this training set. Users should be aware of potential biases inherent in the training data and the possibility of entity misclassification in complex sentences.

## Training data

This model was fine-tuned on the English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset.

The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:

Abbreviation|Description
-|-
O|Outside of a named entity
B-MISC|Beginning of a miscellaneous entity right after another miscellaneous entity
I-MISC|Miscellaneous entity
B-PER|Beginning of a person’s name right after another person’s name
I-PER|Person’s name
B-ORG|Beginning of an organization right after another organization
I-ORG|Organization
B-LOC|Beginning of a location right after another location
I-LOC|Location

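The B-/I- decoding rule described above can be sketched in plain Python. This is an illustrative helper, not part of the model's API, and the tokens and tags below are made-up examples rather than actual model output:

```python
def decode_entities(tokens, tags):
    """Group tokens into (entity_type, text) spans using the B-/I- scheme."""
    entities = []
    current_type, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag == "O":
            # Outside any entity: close the span in progress, if any
            if current_tokens:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
        elif tag.startswith("B-") or current_type != tag[2:]:
            # "B-" starts a new entity, even right after one of the same type
            if current_tokens:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        else:
            # "I-" continues the current entity
            current_tokens.append(token)
    if current_tokens:
        entities.append((current_type, " ".join(current_tokens)))
    return entities

tokens = ["Angela", "Merkel", "visited", "Paris"]
tags = ["B-PER", "I-PER", "O", "B-LOC"]
print(decode_entities(tokens, tags))  # [('PER', 'Angela Merkel'), ('LOC', 'Paris')]
```

Because the scheme marks beginnings explicitly, two adjacent same-type entities (e.g. `B-PER B-PER`) decode as two separate spans rather than one.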
### CoNLL-2003 English Dataset Statistics

This dataset was derived from the Reuters corpus, which consists of Reuters news stories. You can read more about how this dataset was created in the CoNLL-2003 paper.

#### # of training examples per entity type

Dataset|LOC|MISC|ORG|PER
-|-|-|-|-
Train|7140|3438|6321|6600
Dev|1837|922|1341|1842
Test|1668|702|1661|1617

#### # of articles/sentences/tokens per dataset

Dataset|Articles|Sentences|Tokens
-|-|-|-
Train|946|14,987|203,621
Dev|216|3,466|51,362
Test|231|3,684|46,435

## Training procedure

This model was trained on a single NVIDIA V100 GPU with the recommended hyperparameters from the [original BERT paper](https://arxiv.org/pdf/1810.04805), which trained and evaluated the model on the CoNLL-2003 NER task.

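The BERT paper's suggested fine-tuning ranges can be written down as a small search space. The default values picked below are illustrative assumptions from those ranges, not the confirmed settings used for this checkpoint:

```python
# Fine-tuning ranges suggested in the original BERT paper; the defaults
# chosen here are illustrative picks, not this model's confirmed settings.
BERT_FINETUNE_RANGES = {
    "batch_size": [16, 32],
    "learning_rate": [5e-5, 3e-5, 2e-5],
    "num_epochs": [2, 3, 4],
}

def make_config(batch_size=32, learning_rate=3e-5, num_epochs=3):
    """Build a config and check each value against the paper's ranges."""
    config = {
        "batch_size": batch_size,
        "learning_rate": learning_rate,
        "num_epochs": num_epochs,
    }
    for key, value in config.items():
        if value not in BERT_FINETUNE_RANGES[key]:
            raise ValueError(f"{key}={value} is outside the suggested range")
    return config

print(make_config())
```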
## Eval results

| Metric | Score |