rdiehlmartinez committed (verified)
Commit 01187aa · 1 parent: 43ee332

Update README with tokenizer info

Files changed (1): README.md (+1 -1)
README.md CHANGED

@@ -13,7 +13,7 @@ A pre-tokenized, pre-shuffled version of [Dolma](https://huggingface.co/datasets
 ### Overview
 
 The Pico dataset simplifies training by providing:
-- Pre-tokenized text in chunks of 2048 tokens
+- Pre-tokenized text in chunks of 2048 tokens, using the [OLMo Tokenizer](https://huggingface.co/allenai/OLMo-7B-0724-hf/blob/main/tokenizer_config.json)
 - Pre-shuffled data for consistent training
 - Streaming-friendly format
 - 420B tokens total (perfect for 200K steps at batch size 1024)
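
The updated bullet list doubles as a recipe for consuming the data. Below is a minimal sketch of streaming it with the standard 🤗 Datasets API; the repo ID `pico-lm/pretokenized-dolma` and the `input_ids` column name are assumptions, as neither appears in this diff.

```python
# Sketch only: the repo ID and column name are assumptions, not taken from the diff.
from datasets import load_dataset
from transformers import AutoTokenizer

# Streaming avoids materializing the ~420B-token corpus on disk at once.
ds = load_dataset("pico-lm/pretokenized-dolma", split="train", streaming=True)

# The OLMo tokenizer referenced in the updated bullet.
tok = AutoTokenizer.from_pretrained("allenai/OLMo-7B-0724-hf")

example = next(iter(ds))
chunk = example["input_ids"]   # assumed column holding one pre-tokenized chunk
assert len(chunk) == 2048      # fixed-length chunks per the README
print(tok.decode(chunk[:32]))  # spot-check that the IDs round-trip to text
```

As a sanity check on the last bullet: 200,000 steps × 1,024 sequences per batch × 2,048 tokens per sequence = 419,430,400,000 tokens, i.e. roughly 420B, so the advertised total covers the full run without repeating data.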