---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
base_model: distilbert-base-uncased
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: emotion
      type: emotion
      args: default
    metrics:
    - type: accuracy
      value: 0.925
      name: Accuracy
    - type: f1
      value: 0.925169929474641
      name: F1
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: emotion
      type: emotion
      config: default
      split: test
    metrics:
    - type: accuracy
      value: 0.9185
      name: Accuracy
      verified: true
    - type: precision
      value: 0.8812304360487162
      name: Precision Macro
      verified: true
    - type: precision
      value: 0.9185
      name: Precision Micro
      verified: true
    - type: precision
      value: 0.9186256759712246
      name: Precision Weighted
      verified: true
    - type: recall
      value: 0.8685675449036236
      name: Recall Macro
      verified: true
    - type: recall
      value: 0.9185
      name: Recall Micro
      verified: true
    - type: recall
      value: 0.9185
      name: Recall Weighted
      verified: true
    - type: f1
      value: 0.8737330835692586
      name: F1 Macro
      verified: true
    - type: f1
      value: 0.9185
      name: F1 Micro
      verified: true
    - type: f1
      value: 0.9182854700791021
      name: F1 Weighted
      verified: true
    - type: loss
      value: 0.2216690629720688
      name: loss
      verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2202
- Accuracy: 0.925
- F1: 0.9252
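
A minimal inference sketch using the 🤗 Transformers `pipeline` API. The `model_path` value is a placeholder (this card does not state where the checkpoint is saved or published), and the label names assume the standard six-class `emotion` dataset.

```python
from transformers import pipeline

# Placeholder: point this at the fine-tuned checkpoint, either a local
# output directory or the model's id on the Hugging Face Hub.
model_path = "distilbert-base-uncased-finetuned-emotion"

classifier = pipeline("text-classification", model=model_path)

# The emotion dataset has six classes (sadness, joy, love, anger, fear,
# surprise); depending on how id2label was configured, outputs may instead
# show generic labels such as LABEL_0 ... LABEL_5.
print(classifier("I can't wait to see the results of this experiment!"))
# e.g. [{'label': 'joy', 'score': 0.98}]
```
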
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
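
As a rough sketch (not taken from the original training script), these settings map onto `TrainingArguments` in Transformers 4.20 roughly as follows; `output_dir` and the per-epoch evaluation strategy are assumptions, and the Adam betas/epsilon listed above are the optimizer defaults.

```python
from transformers import TrainingArguments

# Sketch only: output_dir and evaluation_strategy are illustrative
# assumptions, not values reported in this card.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=2,
    seed=42,
    lr_scheduler_type="linear",       # default scheduler, listed above
    evaluation_strategy="epoch",      # assumption: matches the per-epoch results below
)
```
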
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8419 | 1.0 | 250 | 0.3236 | 0.9025 | 0.8999 |
| 0.258 | 2.0 | 500 | 0.2202 | 0.925 | 0.9252 |
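
The Accuracy and F1 columns above are consistent with a `compute_metrics` hook along these lines; the weighted F1 average is an assumption inferred from the reported values, not something stated in this card.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Assumed metric hook: accuracy plus weighted-average F1."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),  # assumption: weighted average
    }
```
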
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1