Tags: Fill-Mask · Transformers · Safetensors · udlm · custom_code
yairschiff committed commit 397b9bd (verified) · 1 parent: ce959d9

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED

```diff
@@ -37,7 +37,7 @@ The model architecture is based off of the [Diffusion Transformer architecture](
 
 ### Training Details
 
-The model was trained using the `bert-base-uncased` tokenizer.
+The model was trained using the `yairschiff/qm9-tokenizer` tokenizer, a custom tokenizer for parsing SMILES strings.
 We trained for 25k gradient update steps using a batch size of 2,048.
 We used linear warm-up with 1,000 steps until we reach a learning rate of 3e-4 and the applied cosine-decay until reaching a minimum learning rate of 3e-6.
```
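The schedule described in the diff's unchanged context lines (linear warm-up over 1,000 steps to a peak of 3e-4, then cosine decay to a minimum of 3e-6) can be sketched as below. Treating step 25,000 as the end of the decay, and the function name `lr_at_step`, are assumptions for illustration, not details taken from the model card:

```python
import math

def lr_at_step(step, warmup=1_000, total=25_000, peak=3e-4, floor=3e-6):
    """Sketch of the schedule: linear warm-up to `peak`, then cosine decay to `floor`.

    Assumes the decay ends at `total` steps; the card does not state this explicitly.
    """
    if step < warmup:
        # Linear ramp from 0 up to the peak learning rate over the warm-up steps.
        return peak * step / warmup
    # Cosine decay from `peak` (at the end of warm-up) down to `floor` (at `total`).
    progress = (step - warmup) / (total - warmup)
    return floor + 0.5 * (peak - floor) * (1 + math.cos(math.pi * progress))
```

With these values, the rate reaches 3e-4 exactly at step 1,000 and falls to 3e-6 by the final gradient update.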