---
pipeline_tag: text-classification
tags:
- natural-language-inference
- misogyny
language: en
license: apache-2.0
widget:
- text: "Las mascarillas causan hipoxia. Wearing masks is harmful to human health"
example_title: "Natural Language Inference"
---
# bertweet-base-multi-mami
This is a fine-tuned XLM-RoBERTa model for natural language inference. It has been trained on a large amount of data following the ANLI training pipeline. We include data from:
- [mnli](https://cims.nyu.edu/~sbowman/multinli/) {train, dev and test}
- [snli](https://nlp.stanford.edu/projects/snli/) {train, dev and test}
- [xnli](https://github.com/facebookresearch/XNLI) {train, dev and test}
- [fever](https://fever.ai/resources.html) {train, dev and test}
- [anli](https://github.com/facebookresearch/anli) {train}
The model is validated on the ANLI rounds R1, R2 and R3. The following results can be expected on the test splits.

|Split|Accuracy|
|-|-|
|R1|0.6610|
|R2|0.4990|
|R3|0.4425|
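
# Usage
The model can be queried as a standard sequence-classification checkpoint by passing the premise and hypothesis as a sentence pair, as in the widget example above. Below is a minimal sketch with the `transformers` library; the Hub id is a placeholder and not confirmed by this card, so replace it with the actual repository id.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder Hub id: substitute the actual repository id of this checkpoint.
model_id = "<user>/bertweet-base-multi-mami"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "Las mascarillas causan hipoxia."
hypothesis = "Wearing masks is harmful to human health"

# Encode the premise/hypothesis pair and run a forward pass.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Highest-scoring class among contradiction / entailment / neutral.
pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])
```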
# Labels
```python
label2id = {
    "contradiction": 0,
    "entailment": 1,
    "neutral": 2,
}
```
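
If the published config does not already carry this mapping, it can be attached when loading the model. A sketch under that assumption, again using a placeholder Hub id:

```python
from transformers import AutoModelForSequenceClassification

label2id = {"contradiction": 0, "entailment": 1, "neutral": 2}
id2label = {v: k for k, v in label2id.items()}

# Config overrides passed to from_pretrained are applied to the model config.
model = AutoModelForSequenceClassification.from_pretrained(
    "<user>/bertweet-base-multi-mami",  # placeholder id
    label2id=label2id,
    id2label=id2label,
)
```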