---
widget:
- text: Climate change is just a natural phenomenon
- example_title: 2.1 Contrarian claim
license: mit
language:
- en
metrics:
- f1
pipeline_tag: text-classification
tags:
- climate
- misinformation
---

# Taxonomy Augmented CARDS

## Taxonomy

![Cards Taxonomy](CARDS_taxonomy_2_levels.png)

## Metrics

Per-category F1 scores; **bold** marks the better model in each row.

|      **Category** | **CARDS** | **Augmented CARDS** | **Support** |
|------------------:|----------:|--------------------:|------------:|
|             _0_0_ |      70.9 |            **81.5** |        1049 |
|             _1_1_ |      60.5 |            **70.4** |          28 |
|             _1_2_ |        40 |            **44.4** |          20 |
|             _1_3_ |        37 |            **48.6** |          61 |
|             _1_4_ |      62.1 |            **65.6** |          27 |
|             _1_6_ |      56.7 |            **59.7** |          41 |
|             _1_7_ |      46.4 |              **52** |          89 |
|             _2_1_ |      68.1 |            **69.4** |         154 |
|             _2_3_ |  **36.7** |                  25 |          22 |
|             _3_1_ |  **38.5** |                34.8 |           8 |
|             _3_2_ |        61 |            **74.6** |          31 |
|             _3_3_ |      54.2 |            **65.4** |          23 |
|             _4_1_ |      38.5 |            **49.4** |         103 |
|             _4_2_ |  **37.6** |                28.6 |          61 |
|             _4_4_ |      30.8 |            **54.5** |          46 |
|             _4_5_ |      19.7 |            **39.4** |          50 |
|             _5_1_ |      32.8 |            **38.2** |          96 |
|             _5_2_ |      38.6 |            **53.5** |         498 |
|             _5_3_ |         - |            **62.9** |         200 |
|                   |           |                     |             |
| **Macro Average** |     43.69 |           **53.57** |        2407 |
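
The macro average in the last row is the unweighted mean of the per-category scores, with all 19 categories (including _5_3_) in the denominator. A minimal sketch of that computation, using the Augmented CARDS column from the table above:

```python
# Augmented CARDS per-category F1 scores, copied from the table above
augmented_f1 = [
    81.5, 70.4, 44.4, 48.6, 65.6, 59.7, 52.0, 69.4, 25.0, 34.8,
    74.6, 65.4, 49.4, 28.6, 54.5, 39.4, 38.2, 53.5, 62.9,
]

# Macro average: unweighted mean over the 19 categories
macro_f1 = sum(augmented_f1) / len(augmented_f1)
print(round(macro_f1, 2))  # 53.57
```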

## Code

To run the full pipeline, first apply the binary classification model; texts it flags as contrarian are then passed to this taxonomy model for the fine-grained category:

```python
import torch
from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Models
MAX_LEN = 256
BINARY_MODEL_DIR = "crarojasca/BinaryAugmentedCARDS"
TAXONOMY_MODEL_DIR = "crarojasca/TaxonomyAugmentedCARDS"

# Loading tokenizer (shared by both models)
tokenizer = AutoTokenizer.from_pretrained(BINARY_MODEL_DIR)

# Loading Models
## 1. Binary Model
print("Loading binary model: {}".format(BINARY_MODEL_DIR))
config = AutoConfig.from_pretrained(BINARY_MODEL_DIR)
binary_model = AutoModelForSequenceClassification.from_pretrained(BINARY_MODEL_DIR, config=config)
binary_model.to(device)

## 2. Taxonomy Model
print("Loading taxonomy model: {}".format(TAXONOMY_MODEL_DIR))
config = AutoConfig.from_pretrained(TAXONOMY_MODEL_DIR)
taxonomy_model = AutoModelForSequenceClassification.from_pretrained(TAXONOMY_MODEL_DIR, config=config)
taxonomy_model.to(device)

# Label map for the taxonomy model
id2label = {
    0: '1_1', 1: '1_2', 2: '1_3', 3: '1_4', 4: '1_6', 5: '1_7', 6: '2_1', 
    7: '2_3', 8: '3_1', 9: '3_2', 10: '3_3', 11: '4_1', 12: '4_2', 13: '4_4', 
    14: '4_5', 15: '5_1', 16: '5_2', 17: '5_3'
}


text = "Climate change is just a natural phenomenon"

tokenized_text = tokenizer(
    text, truncation=True, max_length=MAX_LEN, return_tensors="pt"
).to(device)


# Running Binary Model
outputs = binary_model(**tokenized_text)
binary_score = outputs.logits.softmax(dim=1)
binary_prediction = torch.argmax(outputs.logits, dim=1).to("cpu").item()

# Running Taxonomy Model
outputs = taxonomy_model(**tokenized_text)
taxonomy_score = outputs.logits.softmax(dim=1)
taxonomy_prediction = torch.argmax(outputs.logits, dim=1).to("cpu").item()


# Cascade: fall back to "0_0" (no claim) unless the binary model fires
prediction = "0_0" if binary_prediction == 0 else id2label[taxonomy_prediction]
print(prediction)

```
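
The final cascade step can be isolated as a small pure-Python helper, which makes the decision rule easy to test without loading either model (the function name here is illustrative, not part of the released code):

```python
# Label map for the taxonomy model, as in the snippet above
id2label = {
    0: "1_1", 1: "1_2", 2: "1_3", 3: "1_4", 4: "1_6", 5: "1_7", 6: "2_1",
    7: "2_3", 8: "3_1", 9: "3_2", 10: "3_3", 11: "4_1", 12: "4_2", 13: "4_4",
    14: "4_5", 15: "5_1", 16: "5_2", 17: "5_3",
}

def cascade_label(binary_prediction: int, taxonomy_prediction: int) -> str:
    """Return '0_0' (no contrarian claim) unless the binary model predicts 1."""
    return "0_0" if binary_prediction == 0 else id2label[taxonomy_prediction]

print(cascade_label(0, 6))  # 0_0 — binary model found no claim
print(cascade_label(1, 6))  # 2_1 — taxonomy model's category is used
```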