---
pipeline_tag: text-classification
tags:
- natural-language-inference
- misogyny
language: en
license: apache-2.0
widget:
- text: "Las mascarillas causan hipoxia. Wearing masks is harmful to human health"
  example_title: "Natural Language Inference"
---

# bertweet-base-multi-mami
This is a fine-tuned XLM-RoBERTa model for natural language inference. It has been trained on a large amount of data following the ANLI training pipeline; a usage sketch is given after the data list. We include data from:
- [mnli](https://cims.nyu.edu/~sbowman/multinli/) {train, dev and test}
- [snli](https://nlp.stanford.edu/projects/snli/) {train, dev and test}
- [xnli](https://github.com/facebookresearch/XNLI) {train, dev and test}
- [fever](https://fever.ai/resources.html) {train, dev and test}
- [anli](https://github.com/facebookresearch/anli) {train}
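
As a minimal sketch of how the model could be used for inference with the `transformers` library (the repository id below is a placeholder; substitute the actual id of this checkpoint):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder repository id; replace with the actual id of this checkpoint.
model_id = "your-namespace/bertweet-base-multi-mami"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# NLI takes a (premise, hypothesis) pair; the widget example above is reused here.
premise = "Las mascarillas causan hipoxia."
hypothesis = "Wearing masks is harmful to human health"

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)[0]
pred_id = int(probs.argmax())
print(model.config.id2label[pred_id], float(probs[pred_id]))
```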

The model is validated on ANLI. The following results can be expected on the R1, R2 and R3 test splits.
|Split|Accuracy|
|-|-|
|R1|0.6610|
|R2|0.4990|
|R3|0.4425|
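
A sketch of how such numbers could be reproduced with the `datasets` library is shown below. The repository id is again a placeholder, and the label re-mapping assumes the `anli` dataset on the Hub uses 0 = entailment, 1 = neutral, 2 = contradiction:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "your-namespace/bertweet-base-multi-mami"  # placeholder id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

# Assumed ANLI label order on the Hub: 0 = entailment, 1 = neutral, 2 = contradiction.
dataset_to_model = {0: model.config.label2id["entailment"],
                    1: model.config.label2id["neutral"],
                    2: model.config.label2id["contradiction"]}

for split in ("test_r1", "test_r2", "test_r3"):
    data = load_dataset("anli", split=split)
    correct = 0
    for example in data:
        inputs = tokenizer(example["premise"], example["hypothesis"],
                           return_tensors="pt", truncation=True)
        with torch.no_grad():
            pred = int(model(**inputs).logits.argmax(dim=-1))
        correct += int(pred == dataset_to_model[example["label"]])
    print(split, correct / len(data))
```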

# Labels

    label2id = {
        "contradiction": 0,
        "entailment": 1,
        "neutral": 2,
    }
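
If this mapping needs to be attached to a configuration explicitly (for example when re-exporting the checkpoint), a minimal sketch using `AutoConfig`, with a placeholder repository id:

```python
from transformers import AutoConfig

label2id = {"contradiction": 0, "entailment": 1, "neutral": 2}
id2label = {i: label for label, i in label2id.items()}

# Placeholder repository id; the kwargs override the stored config values.
config = AutoConfig.from_pretrained("your-namespace/bertweet-base-multi-mami",
                                    label2id=label2id, id2label=id2label)
print(config.id2label[1])  # -> "entailment"
```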