---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: image_classification
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: en-US
          split: train
          args: en-US
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.525
---

image_classification

This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 1.9905
  • Accuracy: 0.525
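
To try the fine-tuned checkpoint, a minimal inference sketch with the Transformers pipeline API is shown below. The repository id fullstuck/image_classification and the image path are placeholders inferred from the card's metadata, not confirmed identifiers.

```python
# Minimal inference sketch (repo id and image path are placeholders).
from transformers import pipeline

classifier = pipeline(
    task="image-classification",
    model="fullstuck/image_classification",  # assumed repo id; substitute a local path if needed
)

predictions = classifier("path/to/image.jpg")  # local file, URL, or PIL.Image
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```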

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
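
These settings map onto the Transformers Trainer API roughly as in the sketch below. The data directory, split handling, and image preprocessing are illustrative assumptions rather than the exact training script; the listed Adam betas and epsilon are the TrainingArguments defaults, so they are not set explicitly.

```python
# Sketch of a training setup mirroring the listed hyperparameters.
# Data location, splits, and output_dir are assumptions for illustration.
from datasets import load_dataset
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    DefaultDataCollator,
    Trainer,
    TrainingArguments,
)

checkpoint = "google/vit-base-patch16-224-in21k"

# imagefolder layout: one sub-directory per class; the path is a placeholder.
dataset = load_dataset("imagefolder", data_dir="path/to/images")
labels = dataset["train"].features["label"].names

processor = AutoImageProcessor.from_pretrained(checkpoint)

def preprocess(examples):
    # Convert PIL images into the pixel_values tensor ViT expects.
    examples["pixel_values"] = processor(
        [img.convert("RGB") for img in examples["image"]], return_tensors="pt"
    )["pixel_values"]
    del examples["image"]
    return examples

dataset = dataset.with_transform(preprocess)

model = AutoModelForImageClassification.from_pretrained(checkpoint, num_labels=len(labels))

training_args = TrainingArguments(
    output_dir="image_classification",   # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",         # the card reports per-epoch validation metrics
    remove_unused_columns=False,         # keep the "image" column for the on-the-fly transform
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=DefaultDataCollator(),
    train_dataset=dataset["train"],
    eval_dataset=dataset.get("validation", dataset["train"]),  # substitute your held-out split
)
trainer.train()
```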

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.7409 | 0.2875 |
| No log | 2.0 | 80 | 1.5124 | 0.4375 |
| No log | 3.0 | 120 | 1.4255 | 0.4437 |
| No log | 4.0 | 160 | 1.4154 | 0.425 |
| No log | 5.0 | 200 | 1.2886 | 0.4625 |
| No log | 6.0 | 240 | 1.2963 | 0.5125 |
| No log | 7.0 | 280 | 1.3139 | 0.55 |
| No log | 8.0 | 320 | 1.2976 | 0.5312 |
| No log | 9.0 | 360 | 1.4368 | 0.5062 |
| No log | 10.0 | 400 | 1.4022 | 0.5062 |
| No log | 11.0 | 440 | 1.2853 | 0.55 |
| No log | 12.0 | 480 | 1.3265 | 0.5563 |
| 0.8831 | 13.0 | 520 | 1.3894 | 0.55 |
| 0.8831 | 14.0 | 560 | 1.4465 | 0.5312 |
| 0.8831 | 15.0 | 600 | 1.7185 | 0.475 |
| 0.8831 | 16.0 | 640 | 1.7408 | 0.4875 |
| 0.8831 | 17.0 | 680 | 1.5199 | 0.5437 |
| 0.8831 | 18.0 | 720 | 1.7238 | 0.525 |
| 0.8831 | 19.0 | 760 | 1.8348 | 0.4875 |
| 0.8831 | 20.0 | 800 | 1.6278 | 0.5125 |
| 0.8831 | 21.0 | 840 | 1.7539 | 0.5 |
| 0.8831 | 22.0 | 880 | 1.9007 | 0.4938 |
| 0.8831 | 23.0 | 920 | 1.6903 | 0.5375 |
| 0.8831 | 24.0 | 960 | 1.7954 | 0.5062 |
| 0.2214 | 25.0 | 1000 | 1.7070 | 0.575 |
| 0.2214 | 26.0 | 1040 | 1.6764 | 0.5625 |
| 0.2214 | 27.0 | 1080 | 1.8590 | 0.5188 |
| 0.2214 | 28.0 | 1120 | 1.7531 | 0.5188 |
| 0.2214 | 29.0 | 1160 | 1.5238 | 0.5875 |
| 0.2214 | 30.0 | 1200 | 1.6463 | 0.6 |
| 0.2214 | 31.0 | 1240 | 1.7955 | 0.5563 |
| 0.2214 | 32.0 | 1280 | 1.9920 | 0.5 |
| 0.2214 | 33.0 | 1320 | 1.8826 | 0.55 |
| 0.2214 | 34.0 | 1360 | 2.0573 | 0.5 |
| 0.2214 | 35.0 | 1400 | 1.8438 | 0.5312 |
| 0.2214 | 36.0 | 1440 | 1.9004 | 0.5312 |
| 0.2214 | 37.0 | 1480 | 1.8215 | 0.5437 |
| 0.1479 | 38.0 | 1520 | 2.0467 | 0.5437 |
| 0.1479 | 39.0 | 1560 | 1.8564 | 0.5687 |
| 0.1479 | 40.0 | 1600 | 1.8381 | 0.5687 |
| 0.1479 | 41.0 | 1640 | 1.8110 | 0.5687 |
| 0.1479 | 42.0 | 1680 | 2.0587 | 0.5375 |
| 0.1479 | 43.0 | 1720 | 1.9597 | 0.5687 |
| 0.1479 | 44.0 | 1760 | 1.9199 | 0.5687 |
| 0.1479 | 45.0 | 1800 | 1.8714 | 0.5312 |
| 0.1479 | 46.0 | 1840 | 1.9463 | 0.575 |
| 0.1479 | 47.0 | 1880 | 2.0449 | 0.5312 |
| 0.1479 | 48.0 | 1920 | 1.9172 | 0.525 |
| 0.1479 | 49.0 | 1960 | 1.9153 | 0.55 |
| 0.0946 | 50.0 | 2000 | 1.9905 | 0.525 |

Framework versions

  • Transformers 4.33.2
  • PyTorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.13.3
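
To reproduce this environment, the versions above can be pinned in a requirements file along the lines of the sketch below (the extra index URL is only needed for the CUDA 11.8 PyTorch build; adjust to your platform).

```text
# requirements.txt sketch; swap the torch build for your platform if needed
--extra-index-url https://download.pytorch.org/whl/cu118
torch==2.0.1+cu118
transformers==4.33.2
datasets==2.14.5
tokenizers==0.13.3
```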