Update README.md

README.md
## What is it?
The 🍷 FineWeb dataset consists of more than **15T tokens** of cleaned and deduplicated English web data from CommonCrawl. The data processing pipeline is optimized for LLM performance and ran on the 🏭 [`datatrove`](https://github.com/huggingface/datatrove/) library, our large-scale data processing library.

🍷 FineWeb was originally meant to be a fully open replication of 🦅 [RefinedWeb](https://huggingface.co/papers/2306.01116), with a release of the **full dataset** under the **ODC-By 1.0 license**. However, by carefully adding additional filtering steps, we managed to push the performance of 🍷 FineWeb well above that of the original 🦅 RefinedWeb, and models trained on our dataset also outperform models trained on other commonly used high-quality web datasets (like C4, Dolma-v1.6, The Pile, SlimPajama) on our aggregate group of benchmark tasks.
## What is being released?
Along with the dataset, which includes all CommonCrawl dumps since 2013, we also share all the code needed to fully reproduce our processing setup using the 🏭 [`datatrove`](https://github.com/huggingface/datatrove/) library [here](https://github.com/huggingface/datatrove/blob/main/examples/fineweb.py). To enable full replication of our results, we have also published the small ablation models we trained using [`nanotron`](https://github.com/huggingface/nanotron/) to validate the dataset and compare it with other reference datasets. You will find them [here](https://huggingface.co/collections/HuggingFaceFW/ablation-models-662457b0d213e8c14fe47f32), with checkpoints every 1000 steps. We have also published our evaluation results [here](https://huggingface.co/datasets/HuggingFaceFW/fineweb/blob/main/eval_results.csv).
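
If you want a quick look at those evaluation results, the sketch below loads the CSV with `pandas`. It assumes the `blob/main` link above can be fetched through the corresponding `resolve/main` download URL; the column layout is not documented here, so we only peek at it.

```python
import pandas as pd

# raw download URL corresponding to the eval_results.csv file linked above
URL = "https://huggingface.co/datasets/HuggingFaceFW/fineweb/resolve/main/eval_results.csv"

results = pd.read_csv(URL)
print(results.columns.tolist())  # inspect which benchmarks/columns are reported
print(results.head())
```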
You will find details on the different processing decisions we took and some interesting explorations of deduplication methods and differences between CommonCrawl dumps in our technical report to be published in the coming days.
## How to download and use 🍷 FineWeb
You can load the full dataset or a specific crawl/dump (see table below). Dumps have the format `CC-MAIN-(year)-(week number)`.
### Using 🏭 [`datatrove`](https://github.com/huggingface/datatrove/)
```python
from datatrove.pipeline.readers import ParquetReader

# stream a limited number of documents from the dataset files hosted on the Hugging Face Hub
data_reader = ParquetReader("hf://datasets/HuggingFaceFW/fineweb/data", limit=1000)
for document in data_reader():
    # do something with the document
    print(document.text)
```
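
You can also load the same data with the 🤗 `datasets` library. The snippet below is a minimal sketch based on the crawl naming scheme above; the `streaming=True` flag and the use of the `text` column are our assumptions to avoid downloading the full dataset locally.

```python
from datasets import load_dataset

# load a single crawl; dumps follow the CC-MAIN-(year)-(week number) naming scheme
fw = load_dataset("HuggingFaceFW/fineweb", name="CC-MAIN-2024-10", split="train", streaming=True)

for sample in fw.take(5):
    print(sample["text"][:200])
```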
## Dataset performance evaluation and ablations
We conducted our dataset performance ablations and evaluations by training a series of 1.8B-parameter models on 27 billion tokens. To compare 🍷 FineWeb with other datasets, we also trained one of these 1.8B models per target dataset, on 350 billion tokens sampled from it (or the entire dataset when its size was < 350 billion tokens).
### Hyper-parameters for ablation models
### Comparison with other datasets
We compared 🍷 FineWeb with the following datasets:

- [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
- [C4](https://huggingface.co/datasets/allenai/c4)

[INSERT PLOT HERE]
# Dataset card for 🍷 FineWeb
## Dataset Description
### Dataset Summary
This dataset was created by processing 95 [CommonCrawl](https://commoncrawl.org/) dumps comprising web data crawled from the summer of 2013 to March of 2024. 🍷 FineWeb includes a variety of domains and topics in English and is primarily intended to be used as a research artifact on public data in the context of pretraining datasets for large language models. The CommonCrawl data was carefully processed, filtered, and deduplicated with the 🏭 [`datatrove`](https://github.com/huggingface/datatrove/) library, resulting in the largest publicly available clean LLM pretraining dataset, counting around 15 trillion tokens (GPT-2 tokenizer).
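
The token count above is measured with the GPT-2 tokenizer. As a point of reference only (the exact counting script is not shown here), this is how a document's length in those units can be computed with `transformers`:

```python
from transformers import AutoTokenizer

# GPT-2 tokenizer: the unit used for the ~15 trillion token figure above
tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "The quick brown fox jumps over the lazy dog."
n_tokens = len(tokenizer(text)["input_ids"])
print(n_tokens)
```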
## Dataset Structure
### Curation Rationale
While multiple open-weights models have regularly been released in recent months, these releases often do not include the model's training data. With 🍷 FineWeb we aim to provide the open source community with a very large clean pretraining dataset that can be used to push the envelope on truly open source models (open source models where data is also released).
### Source Data
### Data processing steps
We used the 🏭 `datatrove` library to process the data.
You can find a **working script** that launches the [entire processing pipeline here](https://github.com/huggingface/datatrove/blob/main/examples/fineweb.py).

The data processing pipeline consists of:

We anonymize email addresses and public IP addresses.

For emails, we apply a regex pattern and replace any occurrence of an email address with either `email@example.com` or `firstname.lastname@example.org`. For IP addresses, we also employ a regex pattern and then further filter to only anonymize IP addresses [allocated for public networks](https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml). Matched IP addresses are then replaced with one of the following randomly generated IP addresses, which at the time of dataset creation were not responding to ping requests: `22.214.171.124`, `126.96.36.199`, `188.8.131.52`, `184.108.40.206`, `220.127.116.11`, and `18.104.22.168`. We decided against applying regex patterns for phone numbers due to the high false positive rate.
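
As an illustration of the kind of substitution described above (these are simplified patterns for exposition, not the exact ones used in the pipeline), the sketch below uses Python's `re` module together with the standard-library `ipaddress` module to decide whether a matched address is globally routable:

```python
import ipaddress
import re

# simplified patterns, for illustration only
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

EMAIL_PLACEHOLDER = "email@example.com"
IP_PLACEHOLDER = "22.214.171.124"  # one of the replacement addresses listed above


def anonymize(text: str) -> str:
    # replace every email-looking span with a fixed placeholder
    text = EMAIL_RE.sub(EMAIL_PLACEHOLDER, text)

    def replace_ip(match: re.Match) -> str:
        try:
            ip = ipaddress.ip_address(match.group(0))
        except ValueError:
            return match.group(0)  # not a valid IPv4 address, leave it untouched
        # only anonymize addresses allocated for public networks
        return IP_PLACEHOLDER if ip.is_global else match.group(0)

    return IPV4_RE.sub(replace_ip, text)


print(anonymize("Reach jane@doe.org at 8.8.8.8 (public) or 192.168.0.1 (private)"))
```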
Despite our efforts, given that 🍷 FineWeb is sourced from the internet at large, it is very likely that some personally identifiable information (PII) will be present. If you find your own PII in 🍷 FineWeb and would like it removed, please fill out our PII removal form (available soon).
## Considerations for Using the Data
With the release of this dataset we aim to make model training more accessible to the machine learning community at large.

While multiple open-weights models with strong performance have been publicly released in the past, more often than not these releases are not accompanied by the corresponding training dataset. This is unfortunate, as a dataset's specificities and characteristics have been demonstrated to have a very large impact on the performance of the resulting models. As the creation of a high-quality training dataset is a fundamental requirement for training an LLM capable of excelling at downstream tasks, with 🍷 FineWeb we (a) make the dataset creation process more transparent by sharing our entire processing setup, including the codebase used, and (b) help alleviate the costs of dataset curation, both in time and in compute, for model creators by publicly releasing our dataset to the community.
### Discussion of Biases
Efforts were made to minimize the amount of NSFW and toxic content present in the dataset by employing filtering at the URL level. However, there are still a significant number of documents present in the final dataset that could be considered toxic or contain harmful content. As 🍷 FineWeb was sourced from the web as a whole, any harmful biases typically present in it may be reproduced in our dataset.

We deliberately avoided using machine learning filtering methods that define text quality based on similarity to a “gold” source such as Wikipedia, or toxicity classifiers, as these methods have been known to [disproportionately remove content in specific dialects](https://aclanthology.org/D16-1120/) and [overclassify as toxic text related to specific social identities](https://arxiv.org/pdf/2109.07445.pdf), respectively.
### Other Known Limitations
As a consequence of some of the filtering steps applied, it is likely that code content is not prevalent in our dataset. If you are training a model that should also perform code tasks, we recommend you combine 🍷 FineWeb with a code dataset, such as [The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2). You should also probably consider complementing 🍷 FineWeb with specialized curated sources (such as Wikipedia, for example), as they will likely have better formatting than the Wikipedia content included in 🍷 FineWeb (we did not tailor the processing to individual websites).
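
One hedged sketch of such a mixture using the 🤗 `datasets` library is shown below. The code-corpus repo id is a placeholder for whichever code dataset you choose, both sources are assumed to expose a `text` column, and the 90/10 mixing ratio is purely illustrative rather than a recommendation from our ablations.

```python
from datasets import interleave_datasets, load_dataset

# stream FineWeb so the full dataset is never downloaded locally
fineweb = load_dataset(
    "HuggingFaceFW/fineweb", name="CC-MAIN-2024-10", split="train", streaming=True
).select_columns(["text"])

# placeholder repo id: substitute the code dataset of your choice (with a "text" column)
code = load_dataset("my-org/my-code-corpus", split="train", streaming=True).select_columns(["text"])

# illustrative 90/10 web/code mixture; tune the probabilities for your own training mix
mixed = interleave_datasets([fineweb, code], probabilities=[0.9, 0.1], seed=42)

for sample in mixed.take(3):
    print(sample["text"][:100])
```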
## Additional Information
### Future work
We plan not only to continue but also to expand our efforts to create open-source high-quality training datasets, and to improve 🍷 FineWeb itself in future iterations.
### Citation Information