yairschiff committed: Update README.md
README.md
CHANGED
@@ -37,7 +37,7 @@ The model architecture is based off of the [Diffusion Transformer architecture](
 
 ### Training Details
 
-The model was trained using the `
+The model was trained using the `yairschiff/qm9-tokenizer` tokenizer, a custom tokenizer for parsing SMILES strings.
 We trained for 25k gradient update steps using a batch size of 2,048.
 We used linear warm-up with 1,000 steps until reaching a learning rate of 3e-4, then applied cosine decay until reaching a minimum learning rate of 3e-6.
 
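The warm-up-plus-cosine-decay schedule described in the updated section can be sketched as a small function (a minimal illustration under the stated hyperparameters, not the authors' actual training code; the constant names and the `lr_at` helper are mine):

```python
import math

# Hyperparameters from the README: 1,000 warm-up steps to a peak
# learning rate of 3e-4, then cosine decay to a floor of 3e-6
# over the remaining steps of a 25k-step run.
WARMUP_STEPS = 1_000
TOTAL_STEPS = 25_000
PEAK_LR = 3e-4
MIN_LR = 3e-6

def lr_at(step: int) -> float:
    """Learning rate after `step` gradient updates."""
    if step < WARMUP_STEPS:
        # Linear warm-up from 0 to the peak learning rate.
        return PEAK_LR * step / WARMUP_STEPS
    # Cosine decay from the peak down to the minimum learning rate.
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return MIN_LR + 0.5 * (PEAK_LR - MIN_LR) * (1 + math.cos(math.pi * progress))

print(lr_at(0))       # start of warm-up: 0.0
print(lr_at(1_000))   # end of warm-up: ≈ 3e-4 (the peak)
print(lr_at(25_000))  # end of training: ≈ 3e-6 (the floor)
```

The same shape is what e.g. PyTorch's `LambdaLR` or a warmup-then-`CosineAnnealingLR` combination would produce; the sketch just makes the two phases explicit.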