Triangle104 committed
Commit
6dee90e
1 Parent(s): e6ac4d0

Update README.md

Files changed (1)
  1. README.md +200 -0
README.md CHANGED
@@ -15,6 +15,206 @@ tags:
  This model was converted to GGUF format from [`allenai/OLMo-2-1124-7B`](https://huggingface.co/allenai/OLMo-2-1124-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/allenai/OLMo-2-1124-7B) for more details on the model.
 
---
## Model details

We introduce OLMo 2, a new family of 7B and 13B models featuring a 9-point increase in MMLU, among other evaluation improvements, compared to the original OLMo 7B model. These gains come from training on the OLMo-mix-1124 and Dolmino-mix-1124 datasets and a staged training approach.

OLMo is a series of Open Language Models designed to enable the science of language models. These models are trained on the Dolma dataset. We are releasing all code, checkpoints, logs (coming soon), and associated training details.
### Installation

OLMo 2 will be supported in the next release of Transformers; until then, install it from the main branch:

```bash
pip install --upgrade git+https://github.com/huggingface/transformers.git
```
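If you want to confirm that the source install picked up the OLMo 2 architecture, a quick sanity check (illustrative, not part of the original card) is:

```python
# Illustrative check that the installed transformers build knows about OLMo 2.
import transformers
from transformers import AutoConfig

print(transformers.__version__)   # should report a dev build installed from the main branch
config = AutoConfig.from_pretrained("allenai/OLMo-2-1124-7B")
print(config.model_type)          # expected: "olmo2"
```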
### Inference

You can use OLMo with the standard HuggingFace transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional: move model and inputs to CUDA
# inputs = {k: v.to('cuda') for k,v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
# >> 'Language modeling is a key component of any text-based application, but its effectiveness...'
```
For faster inference, you can quantize the model to 8-bit:

```python
import torch

# Requires bitsandbytes
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B",
                                            torch_dtype=torch.float16,
                                            load_in_8bit=True)
```

The quantized model is more sensitive to data types and CUDA operations. To avoid potential issues, it's recommended to pass the inputs directly to CUDA using:

```python
inputs.input_ids.to('cuda')
```
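Putting those pieces together, here is a minimal sketch (adapted from the commented-out CUDA lines in the inference example above, and re-using the `olmo` and `tokenizer` objects defined there) of generating with the quantized model:

```python
# Minimal sketch: generate with the 8-bit model, keeping every input tensor on the GPU.
inputs = tokenizer(["Language modeling is "], return_tensors='pt', return_token_type_ids=False)
inputs = {k: v.to('cuda') for k, v in inputs.items()}  # move all tensors, not just input_ids
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```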
We have released checkpoints for these models. For pretraining, the naming convention is `stepXXX-tokensYYYB`. For checkpoints with ingredients of the soup, the naming convention is `stage2-ingredientN-stepXXX-tokensYYYB`.

To load a specific model revision with HuggingFace, simply add the argument `revision`:

```python
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B", revision="step1000-tokens5B")
```

Or, you can access all the revisions for the models via the following code snippet:

```python
from huggingface_hub import list_repo_refs

out = list_repo_refs("allenai/OLMo-2-1124-7B")
branches = [b.name for b in out.branches]
```
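As an illustration (not part of the original card), the naming conventions above make these branches easy to filter; for example, re-using `branches` from the snippet above to list only the stage-2 "soup ingredient" checkpoints:

```python
import re

# Hypothetical filter based on the stage2-ingredientN-stepXXX-tokensYYYB naming convention.
stage2_pattern = re.compile(r"^stage2-ingredient\d+-step\d+-tokens\d+B$")
stage2_checkpoints = sorted(b for b in branches if stage2_pattern.match(b))
print(stage2_checkpoints)
```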
### Fine-tuning

Model fine-tuning can be done from the final checkpoint (the main revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.

Fine-tune with the OLMo repository:

```bash
torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
    --data.paths=[{path_to_data}/input_ids.npy] \
    --data.label_mask_paths=[{path_to_data}/label_mask.npy] \
    --load_path={path_to_checkpoint} \
    --reset_trainer_state
```

For more documentation, see the GitHub readme.

Further fine-tuning support is being developed in AI2's Open Instruct repository: https://github.com/allenai/open-instruct.
### Model Description

- Developed by: Allen Institute for AI (Ai2)
- Model type: a Transformer style autoregressive language model.
- Language(s) (NLP): English
- License: The code and model are released under Apache 2.0.
- Contact: Technical inquiries: [email protected]. Press: [email protected]
- Date cutoff: Dec. 2023.
### Model Sources

- Project Page: https://allenai.org/olmo
- Repositories:
  - Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
  - Evaluation code: https://github.com/allenai/OLMo-Eval
  - Further fine-tuning code: https://github.com/allenai/open-instruct
- Paper: Coming soon
### Pretraining

| | OLMo 2 7B | OLMo 2 13B |
|---|---|---|
| Pretraining Stage 1 (OLMo-Mix-1124) | 4 trillion tokens (1 epoch) | 5 trillion tokens (1.2 epochs) |
| Pretraining Stage 2 (Dolmino-Mix-1124) | 50B tokens (3 runs), merged | 100B tokens (3 runs) + 300B tokens (1 run), merged |
| Post-training (Tulu 3 SFT OLMo mix) | SFT + DPO + PPO (preference mix) | SFT + DPO + PPO (preference mix) |
#### Stage 1: Initial Pretraining

- Dataset: OLMo-Mix-1124 (3.9T tokens)
- Coverage: 90%+ of total pretraining budget
- 7B Model: ~1 epoch
- 13B Model: 1.2 epochs (5T tokens)
#### Stage 2: Fine-tuning

- Dataset: Dolmino-Mix-1124 (843B tokens)
- Three training mixes:
  - 50B tokens
  - 100B tokens
  - 300B tokens
- Mix composition: 50% high-quality data + academic/Q&A/instruction/math content
#### Model Merging

- 7B Model: 3 versions trained on the 50B mix, merged via model souping (see the sketch below)
- 13B Model: 3 versions on the 100B mix + 1 version on the 300B mix, merged for the final checkpoint
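As a rough, hypothetical sketch of what "model souping" means in practice (a uniform weight average, not necessarily Ai2's exact merging recipe), merging several same-architecture checkpoints can look like this:

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical uniform model soup: average the parameters of several checkpoints
# of the same architecture. The paths below are placeholders for local soup ingredients.
ingredient_paths = ["ingredient-1", "ingredient-2", "ingredient-3"]
models = [AutoModelForCausalLM.from_pretrained(p, torch_dtype=torch.float32) for p in ingredient_paths]

souped = models[0]
with torch.no_grad():
    averaged = {
        name: torch.mean(torch.stack([m.state_dict()[name] for m in models]), dim=0)
        for name in souped.state_dict()
    }
souped.load_state_dict(averaged)
souped.save_pretrained("olmo2-souped")  # placeholder output directory
```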
### Bias, Risks, and Limitations

Like any base language model or fine-tuned model without safety filtering, these models can easily be prompted by users to generate harmful and sensitive content. Such content may also be produced unintentionally, especially in cases involving bias, so we recommend that users consider the risks when applying this technology. Additionally, many statements from OLMo or any LLM are often inaccurate, so facts should be verified.
### Citation

A technical manuscript is forthcoming!

### Model Card Contact

For errors in this model card, contact [email protected].

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)