---
license: mit
task_categories:
- text-generation
language:
- en
pretty_name: fineweb-sample-100BT-tokenized-45b
size_categories:
- 10B<n<100B
---
This dataset contains the FineWeb sample-100BT subset, tokenized with the GPT-2 (`gpt2`) tokenizer. Sequences are truncated to a maximum length of 1024 tokens.
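As a rough sketch, the tokenization described above might look like the following, assuming the Hugging Face `transformers` GPT-2 tokenizer (the exact pipeline used to build this dataset may differ):

```python
import numpy as np
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def tokenize_document(text: str) -> np.ndarray:
    # Truncate each document to 1024 tokens, matching the dataset description.
    ids = tokenizer(text, truncation=True, max_length=1024)["input_ids"]
    # GPT-2's vocabulary (50,257 tokens) fits comfortably in uint16
    # (max 65,535), halving storage relative to int32.
    return np.asarray(ids, dtype=np.uint16)
```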
For storage efficiency, tokens are stored as uint16 NumPy arrays. Shards are stored in the WebDataset format.
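Below is a minimal sketch of iterating over the shards with the `webdataset` library. The shard filename pattern and the `npy` sample key are assumptions for illustration; check the actual shard file names and contents in this repository:

```python
import io

import numpy as np
import webdataset as wds

# Hypothetical shard pattern; substitute the actual shard file names.
dataset = wds.WebDataset("shard-{000000..000009}.tar")

for sample in dataset:
    # Each sample arrives as a dict of raw bytes keyed by file extension;
    # the "npy" key is an assumption about how the shards were written.
    tokens = np.load(io.BytesIO(sample["npy"]))  # dtype should be uint16
    print(tokens.dtype, tokens.shape)
    break
```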