rdiehlmartinez committed: Update README with tokenizer info
README.md
CHANGED
@@ -13,7 +13,7 @@ A pre-tokenized, pre-shuffled version of [Dolma](https://huggingface.co/datasets
 ### Overview
 
 The Pico dataset simplifies training by providing:
-- Pre-tokenized text in chunks of 2048 tokens
+- Pre-tokenized text in chunks of 2048 tokens, using the [OLMo Tokenizer](https://huggingface.co/allenai/OLMo-7B-0724-hf/blob/main/tokenizer_config.json)
 - Pre-shuffled data for consistent training
 - Streaming-friendly format
 - 420B tokens total (perfect for 200K steps at batch size 1024)
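The "420B tokens" figure in the bullet list can be checked directly from the other numbers in the README, assuming tokens = steps × batch size × chunk length (a sketch, not part of the original commit):

```python
# Sanity check for the 420B-token claim:
# total tokens = training steps * batch size * tokens per chunk
steps = 200_000      # "200K steps"
batch_size = 1024    # "batch size 1024"
chunk_len = 2048     # "chunks of 2048 tokens"

total_tokens = steps * batch_size * chunk_len
print(total_tokens)         # 419430400000
print(total_tokens / 1e9)   # ~419.43 billion, i.e. roughly 420B
```

The exact product is about 419.4 billion tokens, which the README rounds to 420B.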