Tags: Text Generation · Transformers · Safetensors · English · olmo2 · conversational · Inference Endpoints
vwxyzjn committed
Commit 84d8c7c
1 Parent(s): 6568974

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -20,7 +20,7 @@ Upon the initial release of OLMo-2 models, we realized the post-trained models d
 
 ## Release Documentation
 
-OLMo 2 13B Instruct November 2024 is post-trained variant of the [OLMo-2 13B November 2024](https://huggingface.co/allenai/OLMo2-13B-1124) model, which has undergone supervised finetuning on an OLMo-specific variant of the [Tülu 3 dataset](allenai/tulu-3-sft-olmo-2-mixture) and further DPO training on [this dataset](https://huggingface.co/datasets/allenai/olmo-2-1124-7b-preference-mix), and finally RLVR training using [this data](https://huggingface.co/datasets/allenai/RLVR-GSM).
+OLMo 2 13B Instruct November 2024 is post-trained variant of the [OLMo-2 13B November 2024](https://huggingface.co/allenai/OLMo2-13B-1124) model, which has undergone supervised finetuning on an OLMo-specific variant of the [Tülu 3 dataset](allenai/tulu-3-sft-olmo-2-mixture) and further DPO training on [this dataset](https://huggingface.co/datasets/allenai/olmo-2-1124-13b-preference-mix), and finally RLVR training using [this data](https://huggingface.co/datasets/allenai/RLVR-GSM).
 Tülu 3 is designed for state-of-the-art performance on a diversity of tasks in addition to chat, such as MATH, GSM8K, and IFEval.
 Check out the [OLMo 2 paper](https://arxiv.org/abs/2501.00656) or [Tülu 3 paper](https://arxiv.org/abs/2411.15124) for more details!
 
@@ -44,7 +44,7 @@ The core models released in this batch include the following:
 - **Model type:** A model trained on a mix of publicly available, synthetic and human-created datasets.
 - **Language(s) (NLP):** Primarily English
 - **License:** Apache 2.0
-- **Finetuned from model:** allenai/OLMo-2-13B-1124-DPO
+- **Finetuned from model:** allenai/OLMo-2-13B-1124-RLVR2
 
 ### Model Sources
 
@@ -53,7 +53,7 @@ The core models released in this batch include the following:
 - Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
 - Evaluation code: https://github.com/allenai/olmes
 - Further fine-tuning code: https://github.com/allenai/open-instruct
-- **Paper:** Coming soon!
+- **Paper:** https://arxiv.org/abs/2501.00656
 - **Demo:** https://playground.allenai.org/
 
 ## Installation
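
As context for the card being edited, here is a minimal sketch of loading and prompting the instruct model this README documents, using the standard Hugging Face transformers API. The repo id `allenai/OLMo-2-1124-13B-Instruct` and the chat-template usage are assumptions inferred from the card's tags (Transformers, conversational), not part of this commit.

```python
# Minimal sketch, not from this commit: load the instruct model the card
# documents and run one chat-formatted generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-1124-13B-Instruct"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The "conversational" tag implies the tokenizer ships a chat template.
messages = [{"role": "user", "content": "What is 2 + 2?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```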