chizhikchi committed
Commit e9ee06a · 1 Parent(s): b56e658
Update README.md
README.md CHANGED
@@ -35,4 +35,23 @@ The model contained in this repository constitutes the fundament of the NER syst
 
 
 # System description paper and citation
-The system description paper
+[The system description paper](https://aclanthology.org/2022.smm4h-1.8/) was published at the Social Media Mining for Health Applications (#SMM4H) Workshop held at COLING 2022 in October 2022.
+
+```
+@inproceedings{chizhikova-etal-2022-sinai,
+    title = "{SINAI}@{SMM}4{H}{'}22: Transformers for biomedical social media text mining in {S}panish",
+    author = "Chizhikova, Mariia and
+      L{\'o}pez-{\'U}beda, Pilar and
+      D{\'\i}az-Galiano, Manuel C. and
+      Ure{\~n}a-L{\'o}pez, L. Alfonso and
+      Mart{\'\i}n-Valdivia, M. Teresa",
+    booktitle = "Proceedings of The Seventh Workshop on Social Media Mining for Health Applications, Workshop {\&} Shared Task",
+    month = oct,
+    year = "2022",
+    address = "Gyeongju, Republic of Korea",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2022.smm4h-1.8",
+    pages = "27--30",
+    abstract = "This paper covers participation of the SINAI team in Tasks 5 and 10 of the Social Media Mining for Health ({\#}SSM4H) workshop at COLING-2022. These tasks focus on leveraging Twitter posts written in Spanish for healthcare research. The objective of Task 5 was to classify tweets reporting COVID-19 symptoms, while Task 10 required identifying disease mentions in Twitter posts. The presented systems explore large RoBERTa language models pre-trained on Twitter data in the case of tweet classification task and general-domain data for the disease recognition task. We also present a text pre-processing methodology implemented in both systems and describe an initial weakly-supervised fine-tuning phase alongside with a submission post-processing procedure designed for Task 10. The systems obtained 0.84 F1-score on the Task 5 and 0.77 F1-score on Task 10.",
+}
+```