# English-Hungarian Quality Estimation with finetuned XLM-RoBERTa model
For further models, scripts and details, see our demo site.
- Pretrained model used: XLM-RoBERTa base
- Finetuned on HuQ corpus
- Labels: regression model
- Separator: `<sep>`
## Limitations
- `max_seq_length = 256`
- Input format: `{src_en_sentence} <sep> {tgt_hu_sentence}`
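The input format above can be sketched as follows. This is a hypothetical usage example, not part of the model card: the repo ID `model_id` is a placeholder you must replace with the actual Hugging Face model name, and the scoring code assumes a standard `transformers` sequence-classification head with a single regression output.

```python
# Sketch of preparing input for the QE model, following the card's
# input format: "{src_en_sentence} <sep> {tgt_hu_sentence}".

SEP = "<sep>"            # separator token from the model card
MAX_SEQ_LENGTH = 256     # limit from the model card

def format_qe_input(src_en_sentence: str, tgt_hu_sentence: str) -> str:
    """Join the English source and Hungarian target with the separator."""
    return f"{src_en_sentence} {SEP} {tgt_hu_sentence}"

if __name__ == "__main__":
    # Hypothetical scoring sketch (assumes `transformers` and `torch`
    # are installed; model_id is a placeholder, not the real repo name).
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    import torch

    model_id = "path/to/english-hungarian-qe-model"  # placeholder
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)

    text = format_qe_input("The cat sat on the mat.", "A macska a szőnyegen ült.")
    inputs = tokenizer(text, truncation=True, max_length=MAX_SEQ_LENGTH,
                       return_tensors="pt")
    with torch.no_grad():
        # Regression head: a single quality score per sentence pair.
        score = model(**inputs).logits.squeeze().item()
    print(score)
```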
## Results
10-fold cross validation on HuQ corpus
| Model | Correlation | MAE | RMSE |
|---|---|---|---|
| baseline | 0.6100 | 0.7459 | 0.9775 |
| XLM-R | 0.7948 | 0.6451 | 0.8898 |
## Citation
If you use this model, please cite the following paper:
```bibtex
@article{yang-rl,
    title = {Enhancing Machine Translation with Quality Estimation and Reinforcement Learning},
    journal = {Annales Mathematicae et Informaticae},
    year = {2023},
    author = {Yang, Zijian Győző and Laki, László János},
    pages = {Accepted}
}
```