---
license: mit
base_model: PORTULAN/albertina-100m-portuguese-ptpt-encoder
tags:
- generated_from_trainer
datasets:
- harem
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NER_harem_albertina-100m-portuguese-ptpt-encoder
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: harem
      type: harem
      config: default
      split: test
      args: default
    metrics:
    - name: Precision
      type: precision
      value: 0.67216673903604
    - name: Recall
      type: recall
      value: 0.725398313027179
    - name: F1
      type: f1
      value: 0.6977687626774848
    - name: Accuracy
      type: accuracy
      value: 0.9532056132627089
---
|
|
|
|
|
|
# NER_harem_albertina-100m-portuguese-ptpt-encoder
|
|
|
This model is a fine-tuned version of [PORTULAN/albertina-100m-portuguese-ptpt-encoder](https://huggingface.co/PORTULAN/albertina-100m-portuguese-ptpt-encoder) on the HAREM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2583
- Precision: 0.6722
- Recall: 0.7254
- F1: 0.6978
- Accuracy: 0.9532
|
|
|
## Model description
|
|
|
This is a named entity recognition (NER) model for European Portuguese. It fine-tunes the 100M-parameter Albertina PT-PT encoder ([PORTULAN/albertina-100m-portuguese-ptpt-encoder](https://huggingface.co/PORTULAN/albertina-100m-portuguese-ptpt-encoder)) with a token-classification head on the HAREM dataset.
|
|
|
## Intended uses & limitations
|
|
|
The model is intended for named entity recognition in European Portuguese text. Since it was fine-tuned only on HAREM, performance on other domains, on other Portuguese variants (e.g. Brazilian Portuguese), or on entity types outside the HAREM tag set may be lower.
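
As a minimal usage sketch (the Hub repo id below is a placeholder, not confirmed by this card), the checkpoint can be loaded with the `transformers` token-classification pipeline:

```python
from transformers import pipeline

# Placeholder repo id: substitute the actual Hub path of this checkpoint.
ner = pipeline(
    "token-classification",
    model="your-username/NER_harem_albertina-100m-portuguese-ptpt-encoder",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("O Museu Nacional de Arte Antiga fica em Lisboa."))
```

`aggregation_strategy="simple"` groups word pieces so each predicted entity comes back as a single span with a label and a score.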
|
|
|
## Training and evaluation data
|
|
|
The model was fine-tuned and evaluated on the HAREM dataset (`default` configuration), a Portuguese corpus commonly used for named entity recognition. The metrics in this card are reported on its test split, as listed in the metadata above.
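
For reference, a sketch of loading the dataset from the Hugging Face Hub (this assumes the public script-based `harem` dataset and the usual tokens/tags schema for NER datasets):

```python
from datasets import load_dataset

# "default" matches the config in the model-index metadata above.
# trust_remote_code is needed because harem is a script-based dataset.
harem = load_dataset("harem", "default", trust_remote_code=True)

print(harem)              # shows the available splits and columns
print(harem["train"][0])  # one example: tokens plus NER tag ids (assumed schema)
```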
|
|
|
## Training procedure
|
|
|
### Training hyperparameters
|
|
|
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 300
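
Roughly, these settings map onto `transformers.TrainingArguments` as follows (a sketch under the listed framework versions; `output_dir` and the per-epoch evaluation cadence are assumptions, not taken from the original training script):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="NER_harem_albertina-100m-portuguese-ptpt-encoder",  # assumed
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=300,
    evaluation_strategy="epoch",  # assumed: the results table logs one row per epoch
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are already the defaults
    # (adam_beta1, adam_beta2, adam_epsilon), so they need no explicit setting.
)
```

Note that `num_epochs` is 300 while the results table below stops at epoch 20; the card does not explain the discrepancy (early stopping is one possibility).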
|
|
|
### Training results
|
|
|
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 16   | 0.5322          | 0.0212    | 0.0117 | 0.0151 | 0.8615   |
| No log        | 2.0   | 32   | 0.3238          | 0.4230    | 0.4981 | 0.4575 | 0.9110   |
| No log        | 3.0   | 48   | 0.2460          | 0.5006    | 0.6007 | 0.5461 | 0.9369   |
| No log        | 4.0   | 64   | 0.2240          | 0.5526    | 0.6396 | 0.5930 | 0.9414   |
| No log        | 5.0   | 80   | 0.2088          | 0.5498    | 0.6340 | 0.5889 | 0.9492   |
| No log        | 6.0   | 96   | 0.2068          | 0.5884    | 0.6645 | 0.6241 | 0.9496   |
| No log        | 7.0   | 112  | 0.2253          | 0.5906    | 0.6720 | 0.6287 | 0.9481   |
| No log        | 8.0   | 128  | 0.2115          | 0.6245    | 0.6874 | 0.6545 | 0.9516   |
| No log        | 9.0   | 144  | 0.2187          | 0.6546    | 0.7062 | 0.6794 | 0.9533   |
| No log        | 10.0  | 160  | 0.2398          | 0.6432    | 0.7020 | 0.6713 | 0.9495   |
| No log        | 11.0  | 176  | 0.2554          | 0.6653    | 0.7043 | 0.6843 | 0.9526   |
| No log        | 12.0  | 192  | 0.2397          | 0.6777    | 0.7212 | 0.6988 | 0.9529   |
| No log        | 13.0  | 208  | 0.2565          | 0.6778    | 0.7207 | 0.6986 | 0.9531   |
| No log        | 14.0  | 224  | 0.2700          | 0.6586    | 0.7142 | 0.6853 | 0.9506   |
| No log        | 15.0  | 240  | 0.2700          | 0.7009    | 0.7259 | 0.7132 | 0.9544   |
| No log        | 16.0  | 256  | 0.2688          | 0.6761    | 0.7240 | 0.6993 | 0.9532   |
| No log        | 17.0  | 272  | 0.2741          | 0.7132    | 0.7343 | 0.7236 | 0.9558   |
| No log        | 18.0  | 288  | 0.2732          | 0.6740    | 0.7132 | 0.6931 | 0.9530   |
| No log        | 19.0  | 304  | 0.2745          | 0.7094    | 0.7310 | 0.7201 | 0.9550   |
| No log        | 20.0  | 320  | 0.2583          | 0.6722    | 0.7254 | 0.6978 | 0.9532   |
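
The Precision, Recall, and F1 columns are entity-level scores of the kind `seqeval` produces for token-classification fine-tuning, while Accuracy is token-level. A sketch of computing them follows; the exact metric script used for this run is an assumption, and the label strings are illustrative HAREM-style tags:

```python
import evaluate

seqeval = evaluate.load("seqeval")

# Illustrative HAREM-style IOB2 label sequences, one list per sentence.
predictions = [["B-PESSOA", "I-PESSOA", "O", "B-LOCAL"]]
references = [["B-PESSOA", "I-PESSOA", "O", "B-LOCAL"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_precision"], results["overall_recall"],
      results["overall_f1"], results["overall_accuracy"])
```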
|
|
|
|
|
### Framework versions
|
|
|
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
|