lhallee committed
Commit cc5aab6 · Parent(s): 6320a7c

Update README.md

Files changed (1): README.md +2 -2
README.md CHANGED
@@ -90,9 +90,9 @@ The plot below showcases performance normalized between the negative control (ra
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f2bd3bdb7cbd214b658c48/TYKzcKxOjxjX5B3rg1sLc.png)

 ## Inference speeds
-We look at various ESM models and their throughput on an H100. Adding efficient batching between ESMC and ESM++ significantly improves the throughput. ESM++ small is even faster than ESM2-35M with long sequences!
+We look at various ESM models and their throughput on an H100. Adding efficient batching between ESMC and ESM++ significantly improves the throughput, although ESM++ is also faster than ESMC for batch size one. ESM++ small is even faster than ESM2-35M with long sequences!
 The most gains will be seen with PyTorch > 2.5 on linux machines.
-
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f2bd3bdb7cbd214b658c48/RfLRSchFivdsqJrWMh4bo.png)

 ### Citation
 If you use any of this implementation or work please cite it (as well as the ESMC preprint). Bibtex for both coming soon.
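The throughput claim in the changed paragraph rests on batched inference, so a minimal sketch of what that looks like may help readers reproduce the comparison. Assumptions not taken from this diff: the `Synthyra/ESMplusplus_small` repo name, loading via `AutoModelForMaskedLM` with `trust_remote_code=True`, and the remote code exposing a tokenizer as `model.tokenizer`.

```python
# Minimal sketch of batched ESM++ inference. The model ID, loading pattern, and
# model.tokenizer attribute are assumptions, not taken from this commit.
import torch
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained(
    "Synthyra/ESMplusplus_small", trust_remote_code=True  # assumed repo name
)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()
tokenizer = model.tokenizer  # assumed attribute provided by the remote code

# Toy protein sequences; padding lets one forward pass cover the whole batch,
# which is where the batched-throughput gain over one-by-one inference comes from.
sequences = ["MPRTEIN", "MSEQWENCE", "MKTAYIAKQRQISFVK"]
batch = tokenizer(sequences, padding=True, return_tensors="pt").to(device)

with torch.no_grad():
    logits = model(**batch).logits  # (batch_size, max_seq_len, vocab_size)
print(logits.shape)
```

Timing this padded batch against a loop over single sequences is one way to observe the batching speedup the README describes; actual numbers will depend on hardware and PyTorch version.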