Olmo 1.7 second stage data mix

#49
by benjamin - opened

Hi!

Is it possible to release the data mix used for the second stage of Olmo 1.7 training (as mentioned here)?

Even though this is just a combination of the Dolma 1.7 data sources, having it as a separate data split would make it much more convenient to use. As far as I know, there are currently no public high-quality second-stage annealing datasets, so this would add a lot of value to the community.

In the meantime, a couple of questions for trying to create this mix myself:

  1. Where is the table of mix weights under the heading "Staged training data and learning rate" in the blog post sourced from? I wasn't able to find the mix weights in the Dolma or Olmo 1.7 READMEs. (As a side note, I also assume the quantities are billions of tokens, not millions as denoted in the table.)
  2. Was the Dolma data shuffled before sharding, i.e., is it enough to download a subsample of the shards (where a shard is e.g. https://olmo-data.org/dolma-v1_7/c4-filtered/c4-0007.json.gz) to subsample the dataset, or would I have to download the entire dataset, shuffle, then subsample?
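For question 2, here is a minimal sketch of what I have in mind: pick a random subset of shard indices and build the corresponding URLs, rather than downloading everything and shuffling. The URL pattern is taken from the example above; the shard count and the subsampling fraction are placeholders, and this only yields an unbiased subsample if the data really was shuffled before sharding.

```python
import random

# URL pattern from the c4-filtered example above; other Dolma sources
# would use their own pattern. The shard count below is a placeholder --
# check the actual shard listing before relying on it.
BASE = "https://olmo-data.org/dolma-v1_7/c4-filtered/c4-{:04d}.json.gz"

def sample_shard_urls(n_shards: int, fraction: float, seed: int = 0) -> list[str]:
    """Return URLs for a random `fraction` of the available shards."""
    rng = random.Random(seed)  # fixed seed so the subsample is reproducible
    k = max(1, int(n_shards * fraction))
    indices = rng.sample(range(n_shards), k)
    return [BASE.format(i) for i in sorted(indices)]

# e.g. take 10% of a hypothetical 171 shards:
urls = sample_shard_urls(171, 0.10)
```

Downloading only these URLs would then approximate a 10% subsample of the source, again assuming documents were shuffled across shards.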

Thanks a lot for any help!

Immediately after I wrote this, I saw that Olmo 2 was released alongside Dolmino, which is exactly what I was looking for, so consider this solved!

benjamin changed discussion status to closed
