# results

This model is a fine-tuned version of BAAI/bge-small-en-v1.5 on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.2684
- Accuracy: 0.8979
- Precision: 0.9260
- Precision Per Class: [0.693325661680092, 0.993795521985433]
- Recall: 0.8979
- Recall Per Class: [0.9812703583061889, 0.8736068294996443]
- F1: 0.9034
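The overall precision and recall above are consistent with support-weighted averages of the two per-class values (which is also why the aggregate recall matches accuracy). A minimal sketch of that aggregation, using illustrative class counts since the true evaluation-set supports are not reported in this card:

```python
def weighted_average(per_class, supports):
    """Support-weighted mean of per-class scores, as in scikit-learn's average='weighted'."""
    total = sum(supports)
    return sum(score * n for score, n in zip(per_class, supports)) / total

# Per-class recall copied from the metrics above; supports are hypothetical.
recall_per_class = [0.9812703583061889, 0.8736068294996443]
supports = [2456, 8424]  # illustrative only
print(round(weighted_average(recall_per_class, supports), 4))
```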
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
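For reference, the linear schedule with 500 warmup steps behaves as sketched below. The total step count of 3070 is taken from the training log in this card; `linear_warmup_lr` is an illustrative helper, not a Transformers API:

```python
def linear_warmup_lr(step, base_lr=5e-05, warmup_steps=500, total_steps=3070):
    """Linear warmup from 0 to base_lr, then linear decay to 0 (the 'linear' scheduler)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_lr(250))   # halfway through warmup
print(linear_warmup_lr(500))   # peak learning rate
print(linear_warmup_lr(3070))  # end of training
```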
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Precision Per Class | Recall | Recall Per Class | F1 |
|---|---|---|---|---|---|---|---|---|---|
| 0.2734 | 1.0 | 307 | 0.2684 | 0.8979 | 0.9260 | [0.693325661680092, 0.993795521985433] | 0.8979 | [0.9812703583061889, 0.8736068294996443] | 0.9034 |
| 0.2102 | 2.0 | 614 | 0.2506 | 0.8896 | 0.9227 | [0.6750418760469011, 0.99480021893815] | 0.8896 | [0.9845276872964169, 0.861987194688167] | 0.8960 |
| 0.1805 | 3.0 | 921 | 0.4546 | 0.8426 | 0.9050 | [0.5899175957343674, 0.9967474866942637] | 0.8426 | [0.991042345276873, 0.7993834479487788] | 0.8539 |
| 0.1068 | 4.0 | 1228 | 0.5444 | 0.8465 | 0.9056 | [0.5963618485742379, 0.9956024626209323] | 0.8465 | [0.987785016286645, 0.8053118330566753] | 0.8573 |
| 0.1227 | 5.0 | 1535 | 0.5583 | 0.8485 | 0.9067 | [0.5994079921065614, 0.9961966062024575] | 0.8485 | [0.989413680781759, 0.8074460516955181] | 0.8592 |
| 0.0663 | 6.0 | 1842 | 0.7334 | 0.8373 | 0.9027 | [0.5818965517241379, 0.9961274947870122] | 0.8373 | [0.989413680781759, 0.7929807920322505] | 0.8491 |
| 0.0342 | 7.0 | 2149 | 0.8487 | 0.8323 | 0.9002 | [0.574750830564784, 0.9949071300179748] | 0.8323 | [0.9861563517915309, 0.7875266777329856] | 0.8447 |
| 0.0832 | 8.0 | 2456 | 0.7713 | 0.8397 | 0.9029 | [0.5857902368293861, 0.995260663507109] | 0.8397 | [0.9869706840390879, 0.7967749585013042] | 0.8512 |
| 0.0308 | 9.0 | 2763 | 0.8480 | 0.8432 | 0.9040 | [0.59130859375, 0.9949955843391227] | 0.8432 | [0.9861563517915309, 0.8015176665876216] | 0.8543 |
| 0.0115 | 10.0 | 3070 | 0.8434 | 0.8503 | 0.9062 | [0.6029925187032419, 0.9944767441860465] | 0.8503 | [0.9845276872964169, 0.8112402181645719] | 0.8607 |
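Validation loss reaches its minimum at epoch 2 and climbs afterwards while training loss keeps falling, which suggests the model begins overfitting early. A small sketch that picks the best checkpoint from the log above:

```python
# Validation loss per epoch, copied from the training-results table.
val_loss = {1: 0.2684, 2: 0.2506, 3: 0.4546, 4: 0.5444, 5: 0.5583,
            6: 0.7334, 7: 0.8487, 8: 0.7713, 9: 0.8480, 10: 0.8434}

best_epoch = min(val_loss, key=val_loss.get)
print(best_epoch, val_loss[best_epoch])  # epoch with the lowest validation loss
```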
### Framework versions

- Transformers 4.42.2
- PyTorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.19.1
## Model tree for fuhakiem/results

Base model: BAAI/bge-small-en-v1.5