---
license: cc-by-nc-sa-4.0
library_name: transformers
pipeline_tag: token-classification
---
### xlm-roberta-base for token classification, fine-tuned for question-answer extraction in English

This model is `xlm-roberta-base` fine-tuned for question-answer extraction on manually annotated Finnish data and ChatGPT-annotated data.

### Hyperparameters

```
batch_size = 8
epochs = 10 (trained for fewer in practice)
base_LM_model = "xlm-roberta-base"
max_seq_len = 512
learning_rate = 5e-5
```
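
The training script itself is not part of this card; the sketch below only illustrates how the hyperparameters above could be wired into a standard `transformers` `Trainer` run for token classification. The label set, output directory, and dataset handling are placeholders, not the actual training setup.

```python
# Illustrative sketch only: plugs the hyperparameters above into a standard
# transformers Trainer setup for token classification. The label set and
# output directory are placeholders, and the datasets are omitted.
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_LM_model = "xlm-roberta-base"
label_list = ["O", "B-question", "I-question", "B-answer", "I-answer"]  # hypothetical labels

tokenizer = AutoTokenizer.from_pretrained(base_LM_model)
model = AutoModelForTokenClassification.from_pretrained(
    base_LM_model, num_labels=len(label_list)
)

training_args = TrainingArguments(
    output_dir="xlmr-qa-extraction",   # placeholder
    per_device_train_batch_size=8,     # batch_size = 8
    num_train_epochs=10,               # epochs = 10 (trained for fewer in practice)
    learning_rate=5e-5,                # learning_rate = 5e-5
)

# Training data would be tokenized with truncation to 512 tokens (max_seq_len)
# and word-level labels aligned to subwords; both steps are omitted here.
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```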
|
### Performance

```
Accuracy = 0.88
Question F1 = 0.77
Answer F1 = 0.81
```
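
The evaluation code is not included here. If the question and answer scores are span-level F1 over BIO-tagged predictions, they could be computed along the following lines with `seqeval`; the tag names are assumptions, and the sequences below are toy examples.

```python
# Illustrative sketch only: token-level accuracy and span-level per-type F1
# with seqeval, assuming BIO tags for question and answer spans.
from seqeval.metrics import accuracy_score, classification_report

# Gold and predicted tag sequences, one list per sentence (toy example).
y_true = [["O", "B-question", "I-question", "O", "B-answer", "I-answer"]]
y_pred = [["O", "B-question", "I-question", "O", "B-answer", "O"]]

print("Accuracy:", accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred))  # per-type precision, recall, F1
```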
|
|
|
### Usage

Instructions on how to use the model and its predictions will be added later.
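
In the meantime, a token-classification model like this one can usually be loaded with the standard `transformers` pipeline. The sketch below is a generic example: the model identifier is a placeholder for this repository's Hub ID, and the entity group names depend on the label set used during fine-tuning.

```python
# Generic usage sketch for a token-classification model; replace the
# placeholder below with this repository's model ID on the Hugging Face Hub.
from transformers import pipeline

qa_extractor = pipeline(
    "token-classification",
    model="<this-repo-id>",          # placeholder, not a real model ID
    aggregation_strategy="simple",   # merge subword pieces into labelled spans
)

text = "What is the capital of Finland? The capital of Finland is Helsinki."
for span in qa_extractor(text):
    print(span["entity_group"], round(span["score"], 3), span["word"])
```

Each returned span carries the predicted label (`entity_group`), a confidence score, the matching text, and character offsets (`start`, `end`).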