# CoT-MAE base uncased
CoT-MAE is a transformers-based Mask Auto-Encoder pre-training architecture designed for Dense Passage Retrieval. CoT-MAE base uncased is a general pre-trained language model trained on the unsupervised MS-MARCO corpus.

Details can be found in our paper and code.
- Paper: [ConTextual Mask Auto-Encoder for Dense Passage Retrieval](https://arxiv.org/abs/2208.07670)
- Code: caskcsg/ir/cotmae
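
As a pre-trained encoder, the checkpoint can be loaded with the standard transformers API. The sketch below is illustrative rather than part of this card: the Hub model id is an assumption, and using the `[CLS]` vector as the passage embedding is a common dense-retrieval convention, not something this card specifies.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed Hub id; substitute the actual id of this checkpoint.
model_id = "caskcsg/cotmae_base_uncased"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

passages = [
    "Dense passage retrieval maps queries and passages into a shared vector space.",
    "CoT-MAE pre-trains the encoder with a contextual masked auto-encoding objective.",
]

with torch.no_grad():
    batch = tokenizer(passages, padding=True, truncation=True, return_tensors="pt")
    outputs = model(**batch)
    # Assumed pooling: take the [CLS] hidden state as the passage embedding;
    # the pooling actually used is normally fixed during retrieval fine-tuning.
    embeddings = outputs.last_hidden_state[:, 0]

print(embeddings.shape)  # (2, hidden_size), e.g. (2, 768) for a base model
```

Note that this checkpoint is a pre-trained backbone; for actual retrieval it is typically fine-tuned on labeled query-passage pairs before its embeddings are used for search.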
## Citations
If you find our work useful, please cite our paper.
```bibtex
@misc{wu2022contextual,
  doi       = {10.48550/ARXIV.2208.07670},
  url       = {https://arxiv.org/abs/2208.07670},
  author    = {Wu, Xing and Ma, Guangyuan and Lin, Meng and Lin, Zijia and Wang, Zhongyuan and Hu, Songlin},
  keywords  = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
  title     = {ConTextual Mask Auto-Encoder for Dense Passage Retrieval},
  publisher = {arXiv},
  year      = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```