claudios committed
Commit 467bb81 · verified · 1 Parent(s): f9a388c

Update README.md

Files changed (1)
  1. README.md +7 -0
README.md CHANGED
@@ -13,13 +13,20 @@ tags:
This model is the unofficial Hugging Face version of "[CuBERT](https://github.com/google-research/google-research/tree/master/cubert)". In particular, this version comes from [gs://cubert/20210711_Python/pre_trained_model_epochs_2__length_512](https://console.cloud.google.com/storage/browser/cubert/20210711_Python/pre_trained_model_epochs_2__length_512). It was trained on 2021-07-11 for 2 epochs with a 512-token context window on the Python BigQuery dataset. I manually converted the TensorFlow checkpoint to PyTorch and have uploaded it here. The [tokenizer](https://github.com/google-research/google-research/blob/master/cubert/python_tokenizer.py) has not been converted yet. All credit goes to Aditya Kanade, Petros Maniatis, Gogul Balakrishnan, and Kensen Shi.
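
Until the tokenizer conversion lands, the checkpoint can still be loaded and run on pre-tokenized inputs. Below is a minimal sketch, assuming the converted weights load as a standard BERT encoder through Hugging Face Transformers' `AutoModel`; the token IDs in it are placeholders standing in for output of the original CuBERT Python tokenizer, not real CuBERT vocabulary IDs.

```python
import torch
from transformers import AutoModel

# Load the converted PyTorch checkpoint (assumed to expose a standard BERT architecture).
model = AutoModel.from_pretrained("claudios/cubert-20210711-Python-512")
model.eval()

# Placeholder token IDs: the CuBERT tokenizer is not converted yet, so in practice
# you would produce IDs with the original python_tokenizer.py from google-research.
input_ids = torch.tensor([[2, 5, 17, 42, 3]])
attention_mask = torch.ones_like(input_ids)

with torch.no_grad():
    outputs = model(input_ids=input_ids, attention_mask=attention_mask)

# (batch, sequence_length, hidden_size) contextual embeddings for each token.
print(outputs.last_hidden_state.shape)
```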

The other versions are available here:

[cubert-20210711-Python-512](https://huggingface.co/claudios/cubert-20210711-Python-512/)

[cubert-20210711-Python-1024](https://huggingface.co/claudios/cubert-20210711-Python-1024/)

[cubert-20210711-Python-2048](https://huggingface.co/claudios/cubert-20210711-Python-2048/)

[cubert-20210711-Java-512](https://huggingface.co/claudios/cubert-20210711-Java-512/)

[cubert-20210711-Java-1024](https://huggingface.co/claudios/cubert-20210711-Java-1024/)

[cubert-20210711-Java-2048](https://huggingface.co/claudios/cubert-20210711-Java-2048/)

  Citation:
  ```bibtex
  @inproceedings{cubert,