Model Card for BERT-RUBER (Korean)
This model is a fine-tuned version of KLUE BERT (https://huggingface.co/klue/bert-base) for open-domain dialogue evaluation, based on the original BERT-RUBER (https://arxiv.org/pdf/1904.10635) architecture.
Model Details
The model consists of a BERT encoder, which produces contextualized embeddings, and an additional multi-layer perceptron (MLP) classifier. To obtain a sentence-level representation, the model applies mean pooling over the token embeddings.
Further details can be found in the original paper: https://arxiv.org/pdf/1904.10635
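The encoder-plus-classifier design described above can be sketched in PyTorch as follows. This is a minimal illustration, not the released implementation: the layer sizes, activation choices, and the `RuberClassifier` name are assumptions, and random tensors stand in for the last hidden states that klue/bert-base would produce for a query and a reply.

```python
import torch
import torch.nn as nn

class RuberClassifier(nn.Module):
    """Sketch of an unreferenced BERT-RUBER head: mean-pooled encoder
    outputs for a query and a reply are concatenated and scored by an MLP.
    Layer sizes here are illustrative, not taken from the checkpoint."""

    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, 1),
            nn.Sigmoid(),  # relatedness score in [0, 1]
        )

    @staticmethod
    def mean_pool(last_hidden: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # Average token embeddings, ignoring padding positions.
        mask = mask.unsqueeze(-1).float()          # (B, T, 1)
        summed = (last_hidden * mask).sum(dim=1)   # (B, H)
        counts = mask.sum(dim=1).clamp(min=1e-9)   # (B, 1)
        return summed / counts

    def forward(self, query_hidden, query_mask, reply_hidden, reply_mask):
        q = self.mean_pool(query_hidden, query_mask)
        r = self.mean_pool(reply_hidden, reply_mask)
        return self.mlp(torch.cat([q, r], dim=-1)).squeeze(-1)

# Dummy encoder outputs standing in for klue/bert-base last hidden states.
model = RuberClassifier()
q_h, r_h = torch.randn(2, 10, 768), torch.randn(2, 8, 768)
q_m = torch.ones(2, 10, dtype=torch.long)
r_m = torch.ones(2, 8, dtype=torch.long)
scores = model(q_h, q_m, r_h, r_m)  # one score per (query, reply) pair
```

In practice the two hidden-state tensors would come from a single shared BERT encoder applied to the tokenized query and reply.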
Model Description
- Developed by: devjwsong
- Model type: BertModel + MLP
- Language(s) (NLP): Korean
- License: MIT
- Finetuned from model: klue/bert-base (https://huggingface.co/klue/bert-base)
Model Sources
- Repository: https://github.com/devjwsong/bert-ruber-kor-pytorch
- Paper: https://arxiv.org/pdf/1904.10635
Citation
- Ghazarian, S., Wei, J. T. Z., Galstyan, A., & Peng, N. (2019). Better automatic evaluation of open-domain dialogue systems with contextualized embeddings. arXiv preprint arXiv:1904.10635.
- Park, S., Moon, J., Kim, S., Cho, W. I., Han, J., Park, J., ... & Cho, K. (2021). Klue: Korean language understanding evaluation. arXiv preprint arXiv:2105.09680.
Model Card Authors
Jaewoo (Kyle) Song