lhallee committed
Commit 895e3ee · verified · 1 Parent(s): 57db187

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -44,7 +44,7 @@ model = AutoModelForMaskedLM.from_pretrained('Synthyra/ESMplusplus_small', trust

 ### Comparison across floating-point precision and implementations
 We measured the difference of the last hidden states of the fp32 weights vs. fp16 or bf16. We find that the fp16 is closer to the fp32 outputs, so we recommend loading in fp16.
-Please note that Evolutionary Scale loads ESMC in bf16 by default, which has its share of advantages and disadvantages in inference / training - so load whichever you like for half precision.
+Please note that the ESM package also loads ESMC in fp32 but casts to bf16 by default, which has its share of advantages and disadvantages in inference / training - so load whichever you like for half precision.

 Average MSE FP32 vs. FP16: 0.00000003