Update README.md
README.md
CHANGED
---
license: apache-2.0
language:
- it
widget:
- text: "una fantastica giornata di #calcio! grande prestazione del mister e della squadra"
  example_title: "Example 1"
- text: "il governo dovrebbe fare politica, non soltanto propaganda! #vergogna"
  example_title: "Example 2"
- text: "che serata da sogno sul #redcarpet! grazie a tutti gli attori e registi del cinema italiano #oscar #awards"
  example_title: "Example 3"
---

--------------------------------------------------------------------------------------------------

<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;"> Task: Sentiment Analysis</span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;"> Model: BERT-TWEET</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;"> Lang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span>
</body>

--------------------------------------------------------------------------------------------------

<h3>Model description</h3>

This is a <b>BERT</b> <b>[1]</b> uncased model for the <b>Italian</b> language, fine-tuned for Sentiment Analysis (<b>positive</b> and <b>negative</b> classes only) on the [SENTIPOLC-16](https://www.evalita.it/campaigns/evalita-2016/tasks-challenge/sentipolc/) dataset, using <b>BERT-TWEET-ITALIAN</b> ([bert-tweet-base-italian-uncased](https://huggingface.co/osiria/bert-tweet-base-italian-uncased)) as a pre-trained model.

<h3>Training and Performances</h3>

The model is trained to perform binary sentiment classification (<b>positive</b> vs <b>negative</b>) and is meant to be used primarily on tweets or other social media posts. It has been fine-tuned for Sentiment Analysis on the SENTIPOLC-16 dataset for 3 epochs, with a constant learning rate of 1e-5 and class weighting to compensate for the class imbalance.
Instances having both positive and negative sentiment have been excluded, resulting in 4154 training instances and 1050 test instances.
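
For readers who want to reproduce a similar setup, the recipe above can be sketched in plain PyTorch. This is only an illustrative sketch, not the original training script: the dataset loading is replaced by a hypothetical two-tweet placeholder, and the class weights are simply set inversely proportional to the class frequencies (one common way to implement the class weighting mentioned above).

```python
# Illustrative sketch of the fine-tuning recipe described above (not the original script).
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertTokenizerFast, BertForSequenceClassification

base = "osiria/bert-tweet-base-italian-uncased"
tokenizer = BertTokenizerFast.from_pretrained(base)
model = BertForSequenceClassification.from_pretrained(base, num_labels=2)

# hypothetical placeholder for the SENTIPOLC-16 training split (0 = negative, 1 = positive)
train_texts = ["che bella giornata di sole!", "servizio pessimo, non ci torno più"]
train_labels = [1, 0]

enc = tokenizer(train_texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
dataset = TensorDataset(enc["input_ids"], enc["attention_mask"], torch.tensor(train_labels))
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# class weights inversely proportional to the class frequencies, to counter the imbalance
counts = torch.bincount(torch.tensor(train_labels), minlength=2).float()
loss_fn = torch.nn.CrossEntropyLoss(weight=counts.sum() / (2 * counts))

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # constant learning rate, no scheduler

model.train()
for epoch in range(3):  # 3 epochs, as reported above
    for input_ids, attention_mask, labels in loader:
        optimizer.zero_grad()
        logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
        loss = loss_fn(logits, labels)
        loss.backward()
        optimizer.step()
```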

The performances on the test set are reported in the following table:

| Accuracy (%) | Recall (%) | Precision (%) | F1 (%) |
| ------------ | ---------- | ------------- | ------ |
| 83.67        | 83.15      | 80.48         | 81.49  |

The Recall, Precision and F1 metrics are averaged over the two classes.
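
Averaging over the two classes corresponds to macro averaging. As a minimal sketch (not the original evaluation code), assuming scikit-learn is available and that `y_true` / `y_pred` hold the gold and predicted labels of the test instances, the metrics can be computed as follows:

```python
# Illustrative only: y_true / y_pred are hypothetical placeholders for the gold and
# predicted labels of the test instances (0 = negative, 1 = positive).
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
print(f"Accuracy: {accuracy:.2%}  Recall: {recall:.2%}  Precision: {precision:.2%}  F1: {f1:.2%}")
```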

<h3>Quick usage</h3>

```python
from transformers import BertTokenizerFast, BertForSequenceClassification, pipeline

tokenizer = BertTokenizerFast.from_pretrained("osiria/bert-tweet-italian-uncased-sentiment")
model = BertForSequenceClassification.from_pretrained("osiria/bert-tweet-italian-uncased-sentiment")

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

classifier("una fantastica giornata di #calcio! grande prestazione del mister e della squadra")

# [{'label': 'POSITIVE', 'score': 0.9883694648742676}]
```
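
The pipeline also accepts a list of texts, which is convenient for scoring several tweets at once. The short sketch below continues from the block above; note that `top_k=None`, supported in recent transformers versions, returns the scores of both classes rather than just the top one:

```python
# Continues from the block above: `classifier` is the pipeline created there.
tweets = [
    "il governo dovrebbe fare politica, non soltanto propaganda! #vergogna",
    "che serata da sogno sul #redcarpet! grazie a tutti gli attori e registi del cinema italiano #oscar #awards",
]

# one prediction per tweet; top_k=None returns the scores of both classes
for tweet, scores in zip(tweets, classifier(tweets, top_k=None)):
    print(tweet, "->", scores)
```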

<h3>References</h3>

[1] https://arxiv.org/abs/1810.04805

<h3>Limitations</h3>

This model was trained on tweets, so it's mainly suitable for general-purpose social media text processing, involving short texts written in a social network style.
It might show limitations when it comes to longer and more structured text, or domain-specific text.

<h3>License</h3>

The model is released under the <b>Apache-2.0</b> license.