# cardiffnlp/roberta-large-tweet-topic-single-all
This model is a fine-tuned version of roberta-large on the tweet_topic_single dataset. It is fine-tuned on the train_all split and validated on the test_2021 split of tweet_topic. The fine-tuning script can be found here. The model achieves the following results on the test_2021 set:
- F1 (micro): 0.896042528056704
- F1 (macro): 0.8000614127334341
- Accuracy: 0.896042528056704
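
Note that the micro-averaged F1 equals the accuracy above. This is expected: in single-label classification every misprediction counts as exactly one false positive and one false negative, so micro precision, micro recall, and micro F1 all reduce to accuracy. A minimal sketch of why (the labels below are illustrative, not drawn from the dataset):

```python
def micro_f1(y_true, y_pred):
    # In single-label classification each correct prediction is one TP,
    # and each wrong prediction is one FP (for the predicted class)
    # plus one FN (for the true class).
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    fp = len(y_pred) - tp
    fn = len(y_true) - tp
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = ["sports", "news", "music", "news"]
y_pred = ["sports", "music", "music", "news"]

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
# micro precision == micro recall == accuracy, hence micro F1 == accuracy
assert micro_f1(y_true, y_pred) == accuracy
```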
## Usage
```python
from transformers import pipeline

# Load the fine-tuned topic classifier from the Hugging Face Hub
pipe = pipeline("text-classification", model="cardiffnlp/roberta-large-tweet-topic-single-all")

# Classify a tweet; returns a list with the predicted label and its score
topic = pipe("Love to take night time bike rides at the jersey shore. Seaside Heights boardwalk. Beautiful weather. Wishing everyone a safe Labor Day weekend in the US.")
print(topic)
```
## Reference
```bibtex
@inproceedings{dimosthenis-etal-2022-twitter,
    title = "{T}witter {T}opic {C}lassification",
    author = "Antypas, Dimosthenis and
      Ushio, Asahi and
      Camacho-Collados, Jose and
      Neves, Leonardo and
      Silva, Vitor and
      Barbieri, Francesco",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics"
}
```