# bert-arsentd-lev

An Arabic BERT model fine-tuned on the ArSentD-LEV dataset for sentiment classification.
## Data

The model was fine-tuned on ~4000 tweets from ArSentD-LEV, which covers multiple Levantine dialects and five sentiment classes; 3 of the 5 classes were used in this experiment.
## Results

| class | precision | recall | f1-score | support |
|---|---|---|---|---|
| 0 | 0.8211 | 0.8080 | 0.8145 | 125 |
| 1 | 0.7174 | 0.7857 | 0.7500 | 84 |
| 2 | 0.6867 | 0.6404 | 0.6628 | 89 |
| accuracy | | | 0.7517 | 298 |
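The figures above are standard per-class precision/recall/F1 scores with overall accuracy. If you re-evaluate the model on your own test split, scikit-learn's `classification_report` produces the same quantities (using scikit-learn here is an assumption for illustration; the card does not state how the table was generated, and the labels below are dummy values):

```python
from sklearn.metrics import classification_report

# Dummy labels for illustration only; replace with your real
# test-set labels and the model's predictions.
y_true = [0, 0, 1, 2, 2, 1]
y_pred = [0, 1, 1, 2, 0, 1]

# output_dict=True returns the metrics as a nested dict instead of text.
report = classification_report(y_true, y_pred, digits=4, output_dict=True)
print(report["accuracy"])           # overall accuracy
print(report["0"]["precision"])     # precision for class 0
```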
## How to use

Install `torch` or `tensorflow` together with the Hugging Face `transformers` library, then load the model and tokenizer directly:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "mofawzy/bert-arsentd-lev"
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
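Once the model and tokenizer are loaded, classification consists of tokenizing the text and taking the argmax of the output logits. A minimal sketch (the example sentence is illustrative, and the meaning of each label id is an assumption — the card does not document the label mapping):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "mofawzy/bert-arsentd-lev"
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model.eval()

text = "خدمة ممتازة"  # hypothetical example sentence ("excellent service")
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, 3)
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)  # one of the 3 class ids from the table above
```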