---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: rsna-intracranial-hemorrhage-detection
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.6151724137931035
---

# rsna-intracranial-hemorrhage-detection

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 1.2164
- Accuracy: 0.6152
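
As a usage illustration, the sketch below runs single-image inference with this checkpoint through the `transformers` image-classification pipeline. The repository id `DifeiT/rsna-intracranial-hemorrhage-detection` and the example image path are assumptions, not confirmed by this card.

```python
# Minimal inference sketch (assumed repo id and image path; adjust to your setup).
from transformers import pipeline
from PIL import Image

# Assumption: the fine-tuned checkpoint is published under this repo id.
classifier = pipeline(
    "image-classification",
    model="DifeiT/rsna-intracranial-hemorrhage-detection",
)

image = Image.open("example_ct_slice.png").convert("RGB")  # hypothetical input image
for prediction in classifier(image):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```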

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
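
The card does not describe the underlying data, but the metadata indicates it was loaded with the `datasets` imagefolder builder. A hedged sketch of that loading pattern is shown below; the directory path and split layout are hypothetical.

```python
# Sketch of loading an image-classification dataset via the imagefolder builder.
# The directory path and split layout are hypothetical; the card does not specify them.
from datasets import load_dataset

# Expects a layout like rsna_ich_images/train/<class_name>/*.png, rsna_ich_images/test/<class_name>/*.png
dataset = load_dataset("imagefolder", data_dir="rsna_ich_images")

print(dataset)                     # DatasetDict with the discovered splits
print(dataset["train"].features)   # includes an automatically inferred `label` ClassLabel
```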

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
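
For reproducibility, the sketch below maps these values onto `transformers.TrainingArguments`. The output directory and the evaluation/logging strategies are assumptions (per-epoch evaluation matches the results table); note that the effective batch size of 64 is train_batch_size (16) × gradient_accumulation_steps (4).

```python
# Hedged reconstruction of the training configuration from the hyperparameters above.
# The output directory and strategy settings are assumptions, not taken from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="rsna-intracranial-hemorrhage-detection",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,   # effective train batch size: 16 * 4 = 64
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",     # assumption: evaluate once per epoch, as in the results table
    logging_strategy="epoch",
)
```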

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.5655        | 1.0   | 238   | 1.5235          | 0.4039   |
| 1.3848        | 2.0   | 477   | 1.3622          | 0.4692   |
| 1.2812        | 3.0   | 716   | 1.2811          | 0.5150   |
| 1.2039        | 4.0   | 955   | 1.1795          | 0.5556   |
| 1.1641        | 5.0   | 1193  | 1.1627          | 0.5534   |
| 1.1961        | 6.0   | 1432  | 1.1393          | 0.5705   |
| 1.1382        | 7.0   | 1671  | 1.0921          | 0.5804   |
| 0.9653        | 8.0   | 1910  | 1.0790          | 0.5876   |
| 0.9346        | 9.0   | 2148  | 1.0727          | 0.5931   |
| 0.9083        | 10.0  | 2387  | 1.0605          | 0.5994   |
| 0.8936        | 11.0  | 2626  | 1.0147          | 0.6146   |
| 0.8504        | 12.0  | 2865  | 1.0849          | 0.5818   |
| 0.8544        | 13.0  | 3103  | 1.0349          | 0.6052   |
| 0.7884        | 14.0  | 3342  | 1.0435          | 0.6074   |
| 0.7974        | 15.0  | 3581  | 1.0082          | 0.6127   |
| 0.7921        | 16.0  | 3820  | 1.0438          | 0.6017   |
| 0.709         | 17.0  | 4058  | 1.0484          | 0.6094   |
| 0.6646        | 18.0  | 4297  | 1.0554          | 0.6221   |
| 0.6832        | 19.0  | 4536  | 1.0455          | 0.6124   |
| 0.7076        | 20.0  | 4775  | 1.0905          | 0.6      |
| 0.7442        | 21.0  | 5013  | 1.1094          | 0.6008   |
| 0.6332        | 22.0  | 5252  | 1.0777          | 0.6063   |
| 0.6417        | 23.0  | 5491  | 1.0765          | 0.6141   |
| 0.6267        | 24.0  | 5730  | 1.1057          | 0.6091   |
| 0.6082        | 25.0  | 5968  | 1.0962          | 0.6171   |
| 0.6191        | 26.0  | 6207  | 1.1178          | 0.6039   |
| 0.5654        | 27.0  | 6446  | 1.1386          | 0.5948   |
| 0.5776        | 28.0  | 6685  | 1.1121          | 0.6105   |
| 0.5531        | 29.0  | 6923  | 1.1497          | 0.6030   |
| 0.6275        | 30.0  | 7162  | 1.1796          | 0.6028   |
| 0.5373        | 31.0  | 7401  | 1.1306          | 0.6132   |
| 0.4775        | 32.0  | 7640  | 1.1523          | 0.6058   |
| 0.5469        | 33.0  | 7878  | 1.1634          | 0.6127   |
| 0.4934        | 34.0  | 8117  | 1.1853          | 0.616    |
| 0.5233        | 35.0  | 8356  | 1.2018          | 0.6055   |
| 0.4896        | 36.0  | 8595  | 1.1585          | 0.6108   |
| 0.5122        | 37.0  | 8833  | 1.1874          | 0.6146   |
| 0.4726        | 38.0  | 9072  | 1.1608          | 0.6193   |
| 0.4372        | 39.0  | 9311  | 1.2403          | 0.6132   |
| 0.498         | 40.0  | 9550  | 1.1752          | 0.6201   |
| 0.4813        | 41.0  | 9788  | 1.2005          | 0.6166   |
| 0.4762        | 42.0  | 10027 | 1.2285          | 0.6022   |
| 0.4852        | 43.0  | 10266 | 1.2192          | 0.6119   |
| 0.4332        | 44.0  | 10505 | 1.2391          | 0.6218   |
| 0.3998        | 45.0  | 10743 | 1.1779          | 0.6196   |
| 0.4467        | 46.0  | 10982 | 1.2048          | 0.6284   |
| 0.4332        | 47.0  | 11221 | 1.2302          | 0.6188   |
| 0.4529        | 48.0  | 11460 | 1.2220          | 0.6188   |
| 0.4281        | 49.0  | 11698 | 1.2013          | 0.624    |
| 0.4199        | 49.84 | 11900 | 1.2164          | 0.6152   |
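
The table shows training loss falling steadily while validation loss bottoms out around epoch 15 (1.0082) and drifts upward afterwards, which suggests overfitting in the later epochs; the best validation accuracy (0.6284) is reached at epoch 46. If the Trainer's `trainer_state.json` was saved alongside the checkpoint, these curves can be visualized with a short script such as the hedged sketch below (the file path is an assumption).

```python
# Hedged sketch: plot validation loss and accuracy per epoch from the Trainer's log.
# Assumes a trainer_state.json produced by transformers.Trainer is available locally.
import json

import matplotlib.pyplot as plt

with open("trainer_state.json") as f:          # hypothetical path
    log_history = json.load(f)["log_history"]

eval_entries = [e for e in log_history if "eval_loss" in e]
epochs = [e["epoch"] for e in eval_entries]

fig, ax1 = plt.subplots()
ax1.plot(epochs, [e["eval_loss"] for e in eval_entries], label="validation loss")
ax1.set_xlabel("epoch")
ax1.set_ylabel("validation loss")

ax2 = ax1.twinx()
ax2.plot(epochs, [e["eval_accuracy"] for e in eval_entries], color="tab:orange", label="accuracy")
ax2.set_ylabel("accuracy")

fig.tight_layout()
plt.show()
```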

### Framework versions

- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
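
A quick way to compare a local environment against these versions is to print the installed package versions, as in the sketch below (the version comments restate the list above).

```python
# Check that the local environment matches the versions used for training.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # trained with 4.33.2
print("PyTorch:", torch.__version__)              # trained with 2.0.1+cu117
print("Datasets:", datasets.__version__)          # trained with 2.14.5
print("Tokenizers:", tokenizers.__version__)      # trained with 0.13.3
```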