---
license: apache-2.0
task_categories:
- fill-mask
- text-generation
language:
- pt
size_categories:
- 10M<n<100M
---
## Description
This is a cleaned version of the Portuguese (pt) section of AllenAI's mC4. The original dataset can be found at https://huggingface.co/datasets/allenai/c4
## Clean procedure
We applied the same cleaning procedure as explained here: https://gitlab.com/yhavinga/c4nlpreproc.git

That repository offers two strategies. The first, found in the main.py file, uses PySpark to create a dataframe that can both clean the text and create a
pseudo shuffle of the entire dataset. We found this strategy clever, but time- and resource-consuming.
To overcome this we took the second approach: leveraging the singlefile.py script together with GNU parallel.
We did the following:
```
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/allenai/c4
cd c4
git lfs pull --include "multilingual/c4-pt.*.json.gz"
ls c4-pt* | parallel --gnu --jobs 96 --progress python ~/c4nlpreproc/singlefile.py {}
```
Be advised that you should install GNU parallel first if you want to reproduce this dataset, or to build one for a different language.
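For reference, C4-style cleaning boils down to line-level filters applied to each document. A minimal sketch of that idea in Python (the thresholds and rules below are illustrative, not the exact ones implemented in singlefile.py):

```python
import re

MIN_WORDS_PER_LINE = 5  # illustrative threshold, not the exact c4nlpreproc value
TERMINAL_PUNCT = (".", "!", "?", '"')

def clean_document(text):
    """Keep only lines that look like natural-language sentences."""
    kept = []
    for line in text.splitlines():
        line = line.strip()
        if len(line.split()) < MIN_WORDS_PER_LINE:
            continue  # drop very short lines (menus, captions, nav bars)
        if not line.endswith(TERMINAL_PUNCT):
            continue  # drop lines without terminal punctuation
        if "{" in line or "lorem ipsum" in line.lower():
            continue  # drop code-like or placeholder lines
        kept.append(line)
    # Return None for documents where nothing survived, so callers can skip them.
    return "\n".join(kept) if kept else None
```

Running this kind of filter per shard is what makes the GNU parallel approach attractive: each c4-pt JSON file can be cleaned independently.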
## Dataset Structure
We kept the same structure as the original, so each record looks like this:
```
{
'timestamp': '2020-02-22T22:24:31Z',
'url': 'https://url here',
'text': 'the content'
}
```
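Each shard is stored as gzipped JSON Lines, so records can be read with the Python standard library alone. A minimal sketch (the file name below is illustrative):

```python
import gzip
import json

def read_records(path):
    """Yield one dict per line from a gzipped JSON Lines shard."""
    with gzip.open(path, "rt", encoding="utf-8") as fh:
        for line in fh:
            yield json.loads(line)

# Example: write and read back a single record in the mC4 schema.
record = {
    "timestamp": "2020-02-22T22:24:31Z",
    "url": "https://url here",
    "text": "the content",
}
with gzip.open("sample.json.gz", "wt", encoding="utf-8") as fh:
    fh.write(json.dumps(record) + "\n")

for rec in read_records("sample.json.gz"):
    print(sorted(rec.keys()))  # ['text', 'timestamp', 'url']
```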
## Considerations for Using the Data
We do not perform any procedure to remove bad words, vulgarity, or profanity. It must be considered that models trained on this scraped corpus will inevitably reflect biases present in blog articles and comments on the Internet. This makes the corpus especially interesting in the context of studying data biases and how to limit their impact.