---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: train
    path:
    - data/CC-MAIN-2014-23/*
    - data/CC-MAIN-2014-35/*
    - data/CC-MAIN-2014-41/*
    - data/CC-MAIN-2014-42/*
    - data/CC-MAIN-2014-49/*
    - data/CC-MAIN-2014-52/*
    - data/CC-MAIN-2015-06/*
    - data/CC-MAIN-2015-11/*
    - data/CC-MAIN-2015-14/*
    - data/CC-MAIN-2015-18/*
    - data/CC-MAIN-2015-22/*
    - data/CC-MAIN-2015-32/*
    - data/CC-MAIN-2015-35/*
    - data/CC-MAIN-2015-40/*
    - data/CC-MAIN-2016-07/*
    - data/CC-MAIN-2016-18/*
    - data/CC-MAIN-2016-22/*
    - data/CC-MAIN-2016-30/*
    - data/CC-MAIN-2016-36/*
    - data/CC-MAIN-2016-44/*
    - data/CC-MAIN-2016-50/*
    - data/CC-MAIN-2017-04/*
    - data/CC-MAIN-2017-09/*
    - data/CC-MAIN-2017-13/*
    - data/CC-MAIN-2017-22/*
    - data/CC-MAIN-2017-26/*
    - data/CC-MAIN-2017-30/*
- config_name: CC-MAIN-2013-20
  data_files:
  - split: train
    path: data/CC-MAIN-2013-20/*
- config_name: CC-MAIN-2013-48
  data_files:
  - split: train
    path: data/CC-MAIN-2013-48/*
- config_name: CC-MAIN-2014-10
  data_files:
  - split: train
    path: data/CC-MAIN-2014-10/*
- config_name: CC-MAIN-2014-15
  data_files:
  - split: train
    path: data/CC-MAIN-2014-15/*
- config_name: CC-MAIN-2014-23
  data_files:
  - split: train
    path: data/CC-MAIN-2014-23/*
- config_name: CC-MAIN-2014-35
  data_files:
  - split: train
    path: data/CC-MAIN-2014-35/*
- config_name: CC-MAIN-2014-41
  data_files:
  - split: train
    path: data/CC-MAIN-2014-41/*
- config_name: CC-MAIN-2014-42
  data_files:
  - split: train
    path: data/CC-MAIN-2014-42/*
- config_name: CC-MAIN-2014-49
  data_files:
  - split: train
    path: data/CC-MAIN-2014-49/*
- config_name: CC-MAIN-2014-52
  data_files:
  - split: train
    path: data/CC-MAIN-2014-52/*
- config_name: CC-MAIN-2015-06
  data_files:
  - split: train
    path: data/CC-MAIN-2015-06/*
- config_name: CC-MAIN-2015-11
  data_files:
  - split: train
    path: data/CC-MAIN-2015-11/*
- config_name: CC-MAIN-2015-14
  data_files:
  - split: train
    path: data/CC-MAIN-2015-14/*
- config_name: CC-MAIN-2015-18
  data_files:
  - split: train
    path: data/CC-MAIN-2015-18/*
- config_name: CC-MAIN-2015-22
  data_files:
  - split: train
    path: data/CC-MAIN-2015-22/*
- config_name: CC-MAIN-2015-32
  data_files:
  - split: train
    path: data/CC-MAIN-2015-32/*
- config_name: CC-MAIN-2015-35
  data_files:
  - split: train
    path: data/CC-MAIN-2015-35/*
- config_name: CC-MAIN-2015-40
  data_files:
  - split: train
    path: data/CC-MAIN-2015-40/*
- config_name: CC-MAIN-2016-07
  data_files:
  - split: train
    path: data/CC-MAIN-2016-07/*
- config_name: CC-MAIN-2016-18
  data_files:
  - split: train
    path: data/CC-MAIN-2016-18/*
- config_name: CC-MAIN-2016-22
  data_files:
  - split: train
    path: data/CC-MAIN-2016-22/*
- config_name: CC-MAIN-2016-30
  data_files:
  - split: train
    path: data/CC-MAIN-2016-30/*
- config_name: CC-MAIN-2016-36
  data_files:
  - split: train
    path: data/CC-MAIN-2016-36/*
- config_name: CC-MAIN-2016-44
  data_files:
  - split: train
    path: data/CC-MAIN-2016-44/*
- config_name: CC-MAIN-2016-50
  data_files:
  - split: train
    path: data/CC-MAIN-2016-50/*
- config_name: CC-MAIN-2017-04
  data_files:
  - split: train
    path: data/CC-MAIN-2017-04/*
- config_name: CC-MAIN-2017-09
  data_files:
  - split: train
    path: data/CC-MAIN-2017-09/*
- config_name: CC-MAIN-2017-13
  data_files:
  - split: train
    path: data/CC-MAIN-2017-13/*
- config_name: CC-MAIN-2017-22
  data_files:
  - split: train
    path: data/CC-MAIN-2017-22/*
- config_name: CC-MAIN-2017-26
  data_files:
  - split: train
    path: data/CC-MAIN-2017-26/*
- config_name: CC-MAIN-2017-30
  data_files:
  - split: train
    path: data/CC-MAIN-2017-30/*
---

# Fineweb-edu-hindi

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/64a70b5db5e5c56860a6309a/GfgrnlU7IK6UgVIrZAouz.webp)

Fineweb-edu-hindi is a synthetic dataset generated by translating [Fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) into Hindi using [IndicTrans2](https://github.com/AI4Bharat/IndicTrans2).
The model variant used is [IndicTrans2-en-indic-dist-200M](https://huggingface.co/ai4bharat/indictrans2-en-indic-dist-200M). The dataset contains about 300 billion tokens, as measured with the Gemma-2-2b tokenizer.

# Hardware Resources:

Google Cloud TPUs on Google Cloud Platform were used for the dataset creation process.

# Code:

Github: [fineweb-translation](https://github.com/kathir-ks/fineweb-translation)

# Contact:

For any queries or issues, reach out to: [Kathir](mailto:kathirksw@gmail.com)
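
# Usage:

Each CC-MAIN snapshot is exposed as its own config, so a single crawl can be loaded without fetching the whole dataset. A minimal sketch using the Hugging Face `datasets` library; the repo id below is a placeholder assumption, substitute this dataset's actual Hub id:

```python
# Sketch: load one CC-MAIN snapshot config with the `datasets` library.
# REPO_ID is a placeholder assumption -- replace it with the real Hub id.
from typing import Dict

REPO_ID = "user/fineweb-edu-hindi"  # placeholder, not the actual repo id

def snapshot_kwargs(snapshot: str) -> Dict[str, str]:
    """Keyword arguments selecting one snapshot config's train split."""
    return {"name": snapshot, "split": "train"}

def load_snapshot(snapshot: str):
    """Stream one snapshot so the full data is not downloaded up front."""
    from datasets import load_dataset
    return load_dataset(REPO_ID, streaming=True, **snapshot_kwargs(snapshot))

# Example (requires network access):
# ds = load_snapshot("CC-MAIN-2013-20")
# print(next(iter(ds)))
```

Omitting the `name` argument loads the `default` config, which concatenates the snapshots listed in its `data_files` entry.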