yury-zyphra committed · verified
Commit 0f13d89 · 1 Parent(s): 0bc994d

Update README.md

Files changed (1): README.md (+13, −1)
README.md CHANGED
@@ -12,6 +12,10 @@ configs:
   data_files:
   - split: train
     path: data/*/*/*
+- config_name: sample-100BT
+  data_files:
+  - split: train
+    path: sample/100BT/*/*
 - config_name: dclm_crossdeduped
   data_files:
   - split: train
@@ -51,10 +55,12 @@ According to our evaluations, Zyda-2 is the most performant per-token open dataset
 For more information, please see our [technical blog](https://www.zyphra.com/post/building-zyda-2).
 
 ## How to download
-Since we preserved the schemas of original component datasets, attempting to download the whole dataset using `datasets.load_dataset()` might fail during the stage of generating a split.
+We preserved the schemas of the original component datasets, meaning that every component has its own schema. For that reason, attempting to download the whole dataset using `datasets.load_dataset()` will fail while generating a split; attempting to stream the default config will fail as well.
 
 To download the whole dataset, we recommend either cloning the repository or, if you must use `datasets.load_dataset()`, downloading the individual components separately.
 
+Only `nemo_id` and `text` are columns common to all components. Select those columns for every component first, and only then concatenate the datasets.
+
 Example command to clone the repository using huggingface-cli: `huggingface-cli download Zyphra/Zyda-2 --repo-type dataset`
 
 Commands to download individual components:
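
For the per-component route, a minimal sketch using the standard `datasets` API; `dclm_crossdeduped` is one of the configs listed in this README, and streaming is shown so nothing has to be materialized on disk:

```python
from datasets import load_dataset

# Each component has its own schema, so load components by config name
# rather than through the default config. streaming=True iterates over
# the data without downloading the whole component first.
dclm = load_dataset(
    "Zyphra/Zyda-2",
    name="dclm_crossdeduped",
    split="train",
    streaming=True,
)
```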
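
And a sketch of the select-then-concatenate step described above. `select_columns` and `concatenate_datasets` are standard `datasets` utilities; apart from `dclm_crossdeduped`, the config names below are assumptions matching the components named in the weights paragraph further down, so verify them against the repository's full `configs:` list:

```python
from datasets import load_dataset, concatenate_datasets

# Assumed component config names (only "dclm_crossdeduped" appears in this
# diff) - check the README's configs section for the authoritative list.
config_names = [
    "dclm_crossdeduped",
    "zyda_crossdeduped-filtered",
    "dolma-cc_crossdeduped-filtered",
    "fwe3",
]

parts = []
for name in config_names:
    ds = load_dataset("Zyphra/Zyda-2", name=name, split="train")
    # Keep only the columns shared by every component; otherwise the
    # differing schemas make concatenation fail.
    parts.append(ds.select_columns(["nemo_id", "text"]))

zyda2 = concatenate_datasets(parts)
```

Note this downloads every component in full; for exploration, the streaming route above is much lighter.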
@@ -66,6 +72,12 @@ Commands to download individual components:
 In this repository we provide the raw results of cross-deduplication and filtering. To achieve the best possible performance, you will need to use appropriate weights during training.
 We found the following weights to be optimal (in the sense of each component's share of the resultant dataset): DCLM - 4.0, FWE3 - 4.0, Zyda - 0.16, Dolma-CC - 0.24.
 
+### (Smaller) sample versions
+Along with the configs above, you can also download a smaller version of the dataset with the following config:
+- `sample-100BT`: a subset randomly sampled from the whole dataset, roughly 100B gpt-neox tokens (252GB)
+
+This sample only has the common columns `nemo_id` and `text`. In addition, it was sampled according to the optimal weights, so you can start using it directly.
+
 
 ## Breakdown by component
 
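One way to apply those weights at load time, sketched under the same assumed config names as above: normalize the weights into sampling probabilities and interleave the streams. `interleave_datasets` samples per document, not per token, so this only approximates token-level proportions, and `IterableDataset.select_columns` requires a reasonably recent `datasets` release:

```python
from datasets import load_dataset, interleave_datasets

# Optimal weights from the paragraph above, i.e. each component's relative
# share of the final mixture.
weights = {
    "dclm_crossdeduped": 4.0,
    "fwe3": 4.0,
    "zyda_crossdeduped-filtered": 0.16,
    "dolma-cc_crossdeduped-filtered": 0.24,
}

names = list(weights)
total = sum(weights.values())
probabilities = [weights[n] / total for n in names]  # normalize to sum to 1

streams = [
    load_dataset("Zyphra/Zyda-2", name=n, split="train", streaming=True)
    .select_columns(["nemo_id", "text"])
    for n in names
]

# Draw documents from the components in the given proportions.
mixed = interleave_datasets(streams, probabilities=probabilities, seed=42)
```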
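The sample config has a uniform schema, so the usual single call works, streaming included:

```python
from datasets import load_dataset

# Pre-mixed ~100B-token sample with only the common columns (nemo_id, text);
# no per-component handling needed.
sample = load_dataset(
    "Zyphra/Zyda-2",
    name="sample-100BT",
    split="train",
    streaming=True,
)
```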