:books: add first dataset documentation
README.md
CHANGED
@@ -1,6 +1,23 @@
---
-
-
---

# Dataset Card Creation Guide

@@ -37,165 +54,107 @@ YAML tags:

## Dataset Description

-- **
-- **
-- **Paper:** [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()
-- **Leaderboard:** [If the dataset supports an active leaderboard, add link here]()
-- **Point of Contact:** [If known, name and email of at least one person the reader can contact for questions about the dataset.]()

### Dataset Summary

-Wikitext-fr language modeling dataset consists of over 70 million tokens extracted from the set of french Wikipedia articles that are classified as "quality articles" or "good articles
[Pointer Sentinel Mixture Models](https://arxiv.org/abs/1609.07843) The dataset is available under the [Creative Commons Attribution-ShareAlike License](https://creativecommons.org/licenses/by-sa/4.0/)

### Supported Tasks and Leaderboards

-
-
-- `task-category-tag`: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* [metric name](https://huggingface.co/metrics/metric_name). The ([model name](https://huggingface.co/model_name) or [model class](https://huggingface.co/transformers/model_doc/model_class.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at [leaderboard url]() and ranks models based on [metric name](https://huggingface.co/metrics/metric_name) while also reporting [other metric name](https://huggingface.co/metrics/other_metric_name).

### Languages

-
-
-When relevant, please provide [BCP-47 codes](https://tools.ietf.org/html/bcp47), which consist of a [primary language subtag](https://tools.ietf.org/html/bcp47#section-2.2.1), with a [script subtag](https://tools.ietf.org/html/bcp47#section-2.2.3) and/or [region subtag](https://tools.ietf.org/html/bcp47#section-2.2.4) if available.

## Dataset Structure

### Data Instances

-

```
{
-'
...
}
```

-Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.

### Data Fields

-
-
-- `example_field`: description of `example_field`
-
-Note that the descriptions can be initialized with the **Show Markdown Data Fields** output of the [tagging app](https://github.com/huggingface/datasets-tagging), you will then only need to refine the generated descriptions.

### Data Splits

-
-
-Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
-
-|                         | Train | Valid | Test |
-| ----- | ------ | ----- | ---- |
-| Input Sentences         |       |       |      |
-| Average Sentence Length |       |       |      |

## Dataset Creation

### Curation Rationale

-

### Source Data

-

#### Initial Data Collection and Normalization

-
-
-If data was collected from other pre-existing datasets, link to source here and to their [Hugging Face version](https://huggingface.co/datasets/dataset_name).
-
-If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.
-
-#### Who are the source language producers?
-
-State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.
-
-If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender.
-
-Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
-
-Describe other people represented or mentioned in the data. Where possible, link to references for the information.
-
-### Annotations
-
-If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.
-
-#### Annotation process
-
-If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.

-#### Who are the annotators?
-
-If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.
-
-Describe the people or systems who originally created the annotations and their selection criteria if applicable.
-
-If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender.
-
-Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.

### Personal and Sensitive Information

-State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).
-
-State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).
-
-If efforts were made to anonymize the data, describe the anonymization process.
-
## Considerations for Using the Data

### Social Impact of Dataset

-Please discuss some of the ways you believe the use of this dataset will impact society.
-
-The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.
-
-Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.
-
### Discussion of Biases

-Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.
-
-For Wikipedia text, see for example [Dinan et al 2020 on biases in Wikipedia (esp. Table 1)](https://arxiv.org/abs/2005.00614), or [Blodgett et al 2020](https://www.aclweb.org/anthology/2020.acl-main.485/) for a more general discussion of the topic.
-
-If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.
-
### Other Known Limitations

-If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.
-
## Additional Information

### Dataset Curators

-List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.
-
### Licensing Information

-

### Citation Information

-Provide the [BibTex](http://www.bibtex.org/)-formatted reference for the dataset. For example:
```
-@
-
-
-
-
}
```

-If the dataset has a [DOI](https://www.doi.org/), please provide it here.

### Contributions

-Thanks to [@

---
+annotations_creators:
+- no-annotation
+language_creators:
+- found
+languages:
+- fr-FR
+licenses:
+- cc-by-sa-4.0
+multilinguality:
+- monolingual
+pretty_name: Wikitext-fr
+size_categories:
+- unknown
+source_datasets:
+- original
+task_categories:
+- sequence-modeling
+task_ids:
+- language-modeling
---

# Dataset Card Creation Guide

## Dataset Description

+- **Repository:** [https://github.com/AntoineSimoulin/gpt-fr](https://github.com/AntoineSimoulin/gpt-fr)
+- **Paper:** [https://aclanthology.org/2021.jeptalnrecital-taln.24.pdf](https://aclanthology.org/2021.jeptalnrecital-taln.24.pdf)

### Dataset Summary

+The Wikitext-fr language modeling dataset consists of over 70 million tokens extracted from the set of French Wikipedia articles classified as "quality articles" or "good articles". It is designed to mirror the English benchmark introduced by Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher (2016),
[Pointer Sentinel Mixture Models](https://arxiv.org/abs/1609.07843). The dataset is available under the [Creative Commons Attribution-ShareAlike License](https://creativecommons.org/licenses/by-sa/4.0/).

### Supported Tasks and Leaderboards

+- `language-modeling`: The dataset can be used to evaluate the generation abilities of a model. Success on this task is typically measured by achieving a *low* perplexity. The [gpt-fr-cased-base](https://huggingface.co/asi/gpt-fr-cased-base) model currently achieves a perplexity of 12.9.
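A minimal sketch of how such a perplexity score might be computed with the linked model is shown below. The Hub dataset path `asi/wikitext_fr` and the `paragraph` field name are assumptions based on this card, not verified identifiers, and the 12.9 figure above is not produced by this snippet.

```python
# Illustrative perplexity sketch; not the evaluation behind the 12.9 figure.
# The dataset path "asi/wikitext_fr" is an assumption based on this card.
import math

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("asi/gpt-fr-cased-base")
model = AutoModelForCausalLM.from_pretrained("asi/gpt-fr-cased-base")
model.eval()

test_set = load_dataset("asi/wikitext_fr", split="test")

losses = []
with torch.no_grad():
    for example in test_set.select(range(10)):  # a few paragraphs only
        enc = tokenizer(example["paragraph"], return_tensors="pt",
                        truncation=True, max_length=512)
        # With labels == input_ids, the forward pass returns the mean
        # next-token cross-entropy over the sequence.
        losses.append(model(**enc, labels=enc["input_ids"]).loss.item())

# exp(mean loss) is a rough per-paragraph perplexity, not a strict
# corpus-level measurement.
print("perplexity ~", math.exp(sum(losses) / len(losses)))
```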

### Languages

+The dataset is in French.

## Dataset Structure

### Data Instances

+The dataset consists of aggregated paragraphs from Wikipedia articles.

```
{
+  'paragraph': ...,
...
}
```

### Data Fields

+- `paragraph`: a paragraph from the original Wikipedia article.
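For illustration, loading the dataset and inspecting one instance might look like this (the Hub path `asi/wikitext_fr` is an assumption based on the repository owner):

```python
# Hypothetical loading sketch; "asi/wikitext_fr" is an assumed Hub path.
from datasets import load_dataset

dataset = load_dataset("asi/wikitext_fr")

print(dataset)                                 # splits and their sizes
print(dataset["train"][0]["paragraph"][:200])  # start of one paragraph
```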

### Data Splits

+The dataset is split into train/valid/test sets.

+| | Train (35) | Train (72) | Valid | Test |
+| ----- | ------ | ----- | ---- | ---- |
+| Number of Documents | 2 126 | 5 902 | 60 | 60 |
+| Number of tokens | 35 166 | 72 961 | 896 | 897 |
+| Vocabulary size | 137 589 | 205 403 | | |
+| Out of Vocabulary | 0.8% | 1.2% | | |
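If the two train sizes are exposed as separate configurations, selecting one might look like the sketch below; the config name is a guess inferred from the column headers, so check the repository for the actual ones.

```python
# "wikitext-72" is a guessed config name inferred from the table header.
from datasets import load_dataset

wikitext_72 = load_dataset("asi/wikitext_fr", "wikitext-72")
print({split: wikitext_72[split].num_rows for split in wikitext_72})
```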

## Dataset Creation

### Curation Rationale

+The dataset was created to evaluate French models against criteria similar to those used for English.

### Source Data

+The Wikitext-fr language modeling dataset consists of over 70 million tokens extracted from the set of French Wikipedia articles classified as "quality articles" or "good articles".
+We did not apply any specific preprocessing, since transformer models typically rely on their own dedicated tokenization.

#### Initial Data Collection and Normalization

+We used the Wikipedia API to collect the articles, since cleaning Wikipedia articles from dumps is not a trivial task.
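As an illustration of this collection route (not the authors' actual script), the MediaWiki API can list a category's members and return plain-text extracts. The category names below are assumptions about how quality and good articles are indexed on French Wikipedia.

```python
# Illustrative sketch of collecting articles through the MediaWiki API;
# not the authors' actual pipeline. Category names are assumptions.
import requests

API = "https://fr.wikipedia.org/w/api.php"

# 1) List a few pages from the "quality article" category
#    ("good articles" would live under "Catégorie:Bon article").
members = requests.get(API, params={
    "action": "query",
    "format": "json",
    "list": "categorymembers",
    "cmtitle": "Catégorie:Article de qualité",
    "cmlimit": 5,
}).json()["query"]["categorymembers"]

# 2) Fetch a plain-text extract for each page, one request per article,
#    which avoids cleaning raw wikitext from the dumps by hand.
for member in members:
    pages = requests.get(API, params={
        "action": "query",
        "format": "json",
        "prop": "extracts",
        "explaintext": 1,
        "titles": member["title"],
    }).json()["query"]["pages"]
    text = next(iter(pages.values())).get("extract", "")
    print(member["title"], "->", len(text), "characters")
```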

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information

+The dataset is available under the [Creative Commons Attribution-ShareAlike License](https://creativecommons.org/licenses/by-sa/4.0/).

### Citation Information

```
+@inproceedings{simoulin:hal-03265900,
+  TITLE = {{Un mod{\`e}le Transformer G{\'e}n{\'e}ratif Pr{\'e}-entrain{\'e} pour le \_\_\_\_\_\_ fran{\c c}ais}},
+  AUTHOR = {Simoulin, Antoine and Crabb{\'e}, Benoit},
+  URL = {https://hal.archives-ouvertes.fr/hal-03265900},
+  BOOKTITLE = {{Traitement Automatique des Langues Naturelles}},
+  ADDRESS = {Lille, France},
+  EDITOR = {Denis, Pascal and Grabar, Natalia and Fraisse, Amel and Cardon, R{\'e}mi and Jacquemin, Bernard and Kergosien, Eric and Balvet, Antonio},
+  PUBLISHER = {{ATALA}},
+  PAGES = {246-255},
+  YEAR = {2021},
+  KEYWORDS = {fran{\c c}ais ; GPT ; G{\'e}n{\'e}ratif ; Transformer ; Pr{\'e}-entra{\^i}n{\'e}},
+  PDF = {https://hal.archives-ouvertes.fr/hal-03265900/file/7.pdf},
+  HAL_ID = {hal-03265900},
+  HAL_VERSION = {v1},
}
```

### Contributions

+Thanks to [@AntoineSimoulin](https://github.com/AntoineSimoulin) for adding this dataset.