---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: domain
    dtype: string
  splits:
  - name: train
    num_bytes: 65506190827
    num_examples: 12169131
  download_size: 34648619492
  dataset_size: 65506190827
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
### Dataset Description
Vietnamese Curated Text Dataset: a large Vietnamese text corpus collected from multiple open Vietnamese datasets and curated with [NeMo Curator](https://github.com/NVIDIA/NeMo-Curator).
- **Developed by:** Viettel Solutions
- **Language:** Vietnamese
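As a minimal usage sketch, the dataset can be streamed with the Hugging Face `datasets` library. The repo ID below is a placeholder; substitute this dataset's actual Hub ID:

```python
# Minimal streaming sketch; "<org>/<dataset-name>" is a placeholder for this
# dataset's Hub ID. Streaming avoids downloading the full ~34 GB archive.
from datasets import load_dataset

ds = load_dataset("<org>/<dataset-name>", split="train", streaming=True)
for example in ds.take(3):
    print(example["id"], example["domain"], example["text"][:80])
```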
### Details
For details, see our technical blog post on NVIDIA's developer blog: [Processing High-Quality Vietnamese Language Data with NVIDIA NeMo Curator](https://developer.nvidia.com/blog/processing-high-quality-vietnamese-language-data-with-nvidia-nemo-curator/).
#### Data Collection
We utilize a combination of datasets containing Vietnamese-language samples, ensuring a robust and representative text corpus. These datasets include (a loading sketch follows this list):
- The Vietnamese subset of the [C4 dataset](https://huggingface.co/datasets/allenai/c4/viewer/vi).
- The Vietnamese subset of the [OSCAR dataset, version 23.01](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301/tree/main/vi_meta).
- [Wikipedia's Vietnamese articles](https://huggingface.co/datasets/wikimedia/wikipedia/viewer/20231101.vi).
- [Binhvq's Vietnamese news corpus](https://huggingface.co/datasets/jetaudio/binhvq_news).
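For reference, a sketch of loading the Vietnamese subsets listed above. Config names follow each dataset's Hub page and may change; OSCAR-2301 is gated and requires accepting its terms of access:

```python
# Loading sketch for the four source corpora; config names are assumptions
# based on each dataset's Hub page. OSCAR-2301 requires authentication.
from datasets import load_dataset

c4_vi = load_dataset("allenai/c4", "vi", split="train", streaming=True)
wiki_vi = load_dataset("wikimedia/wikipedia", "20231101.vi", split="train", streaming=True)
oscar_vi = load_dataset("oscar-corpus/OSCAR-2301", "vi", split="train",
                        streaming=True, token=True)  # gated dataset
news_vi = load_dataset("jetaudio/binhvq_news", split="train", streaming=True)
```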
#### Preprocessing
We use [NeMo Curator](https://github.com/NVIDIA/NeMo-Curator) to curate the collected data. The data curation pipeline includes these key steps:
1. Unicode Reformatting: Texts are standardized into a consistent Unicode format to avoid encoding issues.
2. Exact Deduplication: Removes exact duplicates to reduce redundancy.
3. Quality Filtering:
   - Heuristic Filtering: Applies rule-based filters to remove low-quality content.
   - Classifier-Based Filtering: Uses a machine-learning classifier to score documents and filter out low-quality ones.
A step-by-step **[notebook](https://github.com/NVIDIA/NeMo-Curator/blob/main/tutorials/pretraining-vietnamese-data-curation/pretraining-vietnamese-data-curation.ipynb)** implementing this pipeline is available in the NeMo Curator tutorials.
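For orientation, here is a condensed sketch of these steps using the NeMo Curator API. Class names follow NeMo Curator's public API, but the exact filters, thresholds, and classifier model used in the notebook may differ:

```python
# Condensed sketch of the curation pipeline above. The filter choices,
# thresholds, and classifier model path are illustrative assumptions,
# not the exact configuration from the linked notebook.
from nemo_curator import ExactDuplicates, Modify, ScoreFilter, Sequential
from nemo_curator.datasets import DocumentDataset
from nemo_curator.filters import FastTextQualityFilter, WordCountFilter
from nemo_curator.modifiers import UnicodeReformatter

dataset = DocumentDataset.read_json("raw_vi/*.jsonl", add_filename=True)

# 1. Unicode reformatting: normalize text into consistent Unicode.
dataset = Modify(UnicodeReformatter(), text_field="text")(dataset)

# 2. Exact deduplication: hash documents and identify identical texts to drop.
exact_dups = ExactDuplicates(id_field="id", text_field="text", hash_method="md5")
duplicates = exact_dups(dataset)

# 3. Quality filtering: heuristic rules first, then a quality classifier.
quality_pipeline = Sequential([
    ScoreFilter(WordCountFilter(min_words=50), text_field="text"),
    ScoreFilter(FastTextQualityFilter(model_path="quality_classifier.bin"),
                text_field="text"),
])
dataset = quality_pipeline(dataset)
dataset.to_json("curated_vi/", write_to_filename=True)
```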
#### Dataset Statistics
**Content diversity**
<img src="https://cdn-uploads.huggingface.co/production/uploads/661766c00c68b375f3f0ccc3/mW6Pct3uyP_XDdGmE8EP3.png" alt="Domain proportion in curated dataset" width="500"/>
**Character-based metrics**
<img src="https://cdn-uploads.huggingface.co/production/uploads/661766c00c68b375f3f0ccc3/W9TQjM2vcC7uXozyERHSQ.png" alt="Box plots of percentage of symbols, numbers, and whitespace characters compared to the total characters, word counts and average word lengths" width="900"/>
**Token count distribution**
<img src="https://cdn-uploads.huggingface.co/production/uploads/661766c00c68b375f3f0ccc3/PDelYpBI0DefSmQgFONgE.png" alt="Distribution of document sizes (in terms of token count)" width="500"/>
**Embedding visualization**
<img src="https://cdn-uploads.huggingface.co/production/uploads/661766c00c68b375f3f0ccc3/sfeoZWuQ7DcSpbmUOJ12r.png" alt="UMAP visualization of 5% of the dataset" width="650"/>
*UMAP visualization of 5% of the dataset*
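A rough sketch of how such a projection can be produced: embed a document sample, then project to 2D with UMAP. The embedding model, sample, and UMAP parameters below are assumptions, not the exact setup used:

```python
# Rough sketch: embed a sample of documents, then project to 2D with UMAP.
# Embedding model and UMAP parameters are assumptions.
import matplotlib.pyplot as plt
import umap
from sentence_transformers import SentenceTransformer

# Placeholder sample; in practice, draw ~5% of the dataset's documents.
sample_texts = [f"ví dụ văn bản tiếng Việt số {i}" for i in range(200)]

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = model.encode(sample_texts, batch_size=64, show_progress_bar=True)
coords = umap.UMAP(n_neighbors=15, min_dist=0.1).fit_transform(embeddings)

plt.scatter(coords[:, 0], coords[:, 1], s=1)
plt.title("UMAP projection of document embeddings")
plt.savefig("umap_vi.png", dpi=150)
```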