---
language:
- fr
license: cc-by-4.0
task_categories:
- token-classification
dataset_info:
  features:
  - name: ner_tags
    sequence: int64
  - name: tokens
    sequence: string
  - name: pos_tags
    sequence: string
  splits:
  - name: train
    num_bytes: 17859073
    num_examples: 26754
  download_size: 3480973
  dataset_size: 17859073
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# WikiNER-fr-gold

This dataset is a manually revised version of 20% of the French portion of [WikiNER](https://doi.org/10.1016/j.artint.2012.03.006).
The original dataset, from which WikiNER-fr-gold is derived, is currently available [here](https://figshare.com/articles/dataset/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500).
The entities are annotated using the BIOES scheme.
The POS tags are not revised, i.e. they remain the same as in the original dataset.
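
As an illustration of the BIOES scheme, multi-token entities are marked with B- (begin), I- (inside) and E- (end), and single-token entities with S-. The French sentence below is a made-up example, not drawn from the dataset:

```
Victor   Hugo     est   né    à    Besançon   .
B-PER    E-PER    O     O     O    S-LOC      O
```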

For more information on the revision details, please refer to our paper [WikiNER-fr-gold: A Gold-Standard NER Corpus](https://arxiv.org/abs/2411.00030).

The dataset is available in two formats. 
The CoNLL version contains three columns: token, POS tag and NER tag. 
The Parquet version can be loaded with the `datasets` library.
Since the corpus was originally conceived as a test set, there is no recommended train/dev/test split; the single downloaded split is labeled `train` by default.

```python
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub (single 'train' split).
ds = load_dataset('danrun/WikiNER-fr-gold')

# Inspect the first example.
ds['train'][0]
# {'ner_tags': [...], 'tokens': [...], 'pos_tags': [...]}
```
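
If you need a dev or test split for your own experiments, one option is to carve it out yourself with `train_test_split` from the `datasets` library. A minimal sketch is shown below; the 90/10 ratio and the seed are arbitrary choices for illustration, not a recommendation from the authors:

```python
from datasets import load_dataset

ds = load_dataset('danrun/WikiNER-fr-gold')

# Arbitrary 90/10 split for illustration; the corpus ships with no official split.
split = ds['train'].train_test_split(test_size=0.1, seed=42)
train_ds, test_ds = split['train'], split['test']
```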

The NER tags are indexed using the following table (see `labels.json`):

```
{
 'O': 0,
 'B-PER': 1,
 'I-PER': 2,
 'E-PER': 3,
 'S-PER': 4,
 'B-LOC': 5,
 'I-LOC': 6,
 'E-LOC': 7,
 'S-LOC': 8,
 'B-ORG': 9,
 'I-ORG': 10,
 'E-ORG': 11,
 'S-ORG': 12,
 'B-MISC': 13,
 'I-MISC': 14,
 'E-MISC': 15,
 'S-MISC': 16
}
```
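
To recover string labels from the integer `ner_tags`, you can invert this mapping. A minimal sketch, assuming `ds` has been loaded as above and that `label2id` simply reproduces the table:

```python
# Mapping copied from the table above (see labels.json).
label2id = {
    'O': 0,
    'B-PER': 1, 'I-PER': 2, 'E-PER': 3, 'S-PER': 4,
    'B-LOC': 5, 'I-LOC': 6, 'E-LOC': 7, 'S-LOC': 8,
    'B-ORG': 9, 'I-ORG': 10, 'E-ORG': 11, 'S-ORG': 12,
    'B-MISC': 13, 'I-MISC': 14, 'E-MISC': 15, 'S-MISC': 16,
}
id2label = {v: k for k, v in label2id.items()}

# Pair each token of the first example with its string NER label.
example = ds['train'][0]
labels = [id2label[i] for i in example['ner_tags']]
print(list(zip(example['tokens'], labels)))
```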



## Citation

```
@misc{cao2024wikinerfrgoldgoldstandardnercorpus,
      title={WikiNER-fr-gold: A Gold-Standard NER Corpus}, 
      author={Danrun Cao and Nicolas Béchet and Pierre-François Marteau},
      year={2024},
      eprint={2411.00030},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2411.00030}, 
}
```