# ECBERT-base-mlm
This model is a fine-tuned version of Graimond/ECBERT-base-mlm, trained on the dataset from Gorodnichenko, Y., Pham, T., & Talavera, O. (2023). *Data and Code for: The Voice of Monetary Policy* (Version v1) [Dataset]. ICPSR (Inter-university Consortium for Political and Social Research). https://doi.org/10.3886/E178302V1. The best model achieves the following results on the evaluation set:
- Loss: 0.4129
- Accuracy: 85.94%
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
Gorodnichenko, Y., Pham, T., & Talavera, O. (2023). *Data and Code for: The Voice of Monetary Policy* (Version v1) [Dataset]. ICPSR (Inter-university Consortium for Political and Social Research). https://doi.org/10.3886/E178302V1.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-5
- weight_decay: 0.01
- per_device_train_batch_size: 16
- seed: 42
- epochs: 20
### Training results
| Epoch | Training Loss | Validation Loss |
|:-----:|:-------------:|:---------------:|
| 1 | No log | 0.886533 |
| 2 | No log | 0.514593 |
| 3 | No log | 0.437099 |
| 4 | 0.683200 | 0.420006 |
| 5 | 0.683200 | 0.453126 |
| 6 | 0.683200 | 0.412876 |
| 7 | 0.262900 | 0.621511 |
| 8 | 0.262900 | 0.527209 |
| 9 | 0.262900 | 0.673689 |
| 10 | 0.191300 | 0.711371 |
| 11 | 0.191300 | 0.578193 |
| 12 | 0.191300 | 0.854842 |
| 13 | 0.141100 | 0.809792 |
| 14 | 0.141100 | 0.847027 |
| 15 | 0.141100 | 0.847365 |
| 16 | 0.085900 | 0.846864 |
| 17 | 0.085900 | 0.880487 |
| 18 | 0.085900 | 0.870781 |
| 19 | 0.085900 | 0.868764 |
| 20 | 0.076000 | 0.871563 |
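The reported evaluation loss of 0.4129 corresponds to epoch 6, the epoch with the lowest validation loss in the table; after that point training loss keeps falling while validation loss rises, a typical overfitting pattern. A minimal sketch of selecting that best epoch from the logged values (losses copied from the table above; the assumption that the best model was chosen by lowest validation loss, as `load_best_model_at_end=True` would do in the HF Trainer, is not stated in the card):

```python
# Validation losses logged per epoch (values copied from the table above).
val_loss = {
    1: 0.886533, 2: 0.514593, 3: 0.437099, 4: 0.420006, 5: 0.453126,
    6: 0.412876, 7: 0.621511, 8: 0.527209, 9: 0.673689, 10: 0.711371,
    11: 0.578193, 12: 0.854842, 13: 0.809792, 14: 0.847027, 15: 0.847365,
    16: 0.846864, 17: 0.880487, 18: 0.870781, 19: 0.868764, 20: 0.871563,
}

# Pick the epoch with the lowest validation loss (assumed selection rule).
best_epoch = min(val_loss, key=val_loss.get)
print(best_epoch, round(val_loss[best_epoch], 4))  # → 6 0.4129
```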
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0