swinv2-base-patch4-window8-256-dmae-humeda-DAV14

This model is a fine-tuned version of microsoft/swinv2-base-patch4-window8-256 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8478
  • Accuracy: 0.7692
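
For quick testing, the checkpoint can be loaded with the standard transformers image-classification classes. A minimal inference sketch, assuming the repository name RobertoSonic/swinv2-base-patch4-window8-256-dmae-humeda-DAV14 and a placeholder local file image.jpg:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Checkpoint name taken from this repository; image.jpg is a placeholder input.
checkpoint = "RobertoSonic/swinv2-base-patch4-window8-256-dmae-humeda-DAV14"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(checkpoint)
model.eval()

image = Image.open("image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")  # resizes/normalizes to 256x256

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```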

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 40
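
As a reproduction aid, the settings above map onto transformers TrainingArguments roughly as follows. This is a sketch only: the output directory and the evaluation strategy are assumptions, since the dataset and exact training script are not documented.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir and eval_strategy
# are assumptions, not taken from the card.
training_args = TrainingArguments(
    output_dir="swinv2-base-patch4-window8-256-dmae-humeda-DAV14",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # 32 x 4 = total train batch size of 128 on one device
    num_train_epochs=40,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",             # AdamW with betas=(0.9, 0.999), eps=1e-08
    seed=42,
    eval_strategy="epoch",           # assumption: per-epoch evaluation
)
```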

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 0.8421  | 4    | 1.6083          | 0.2115   |
| No log        | 1.8421  | 8    | 1.5519          | 0.3077   |
| 7.0606        | 2.8421  | 12   | 1.4896          | 0.3846   |
| 7.0606        | 3.8421  | 16   | 1.4160          | 0.3846   |
| 6.3113        | 4.8421  | 20   | 1.3599          | 0.3269   |
| 6.3113        | 5.8421  | 24   | 1.2338          | 0.3462   |
| 6.3113        | 6.8421  | 28   | 1.1538          | 0.4808   |
| 5.1603        | 7.8421  | 32   | 1.0931          | 0.5577   |
| 5.1603        | 8.8421  | 36   | 1.0510          | 0.5577   |
| 3.5688        | 9.8421  | 40   | 0.9583          | 0.5577   |
| 3.5688        | 10.8421 | 44   | 0.9648          | 0.5577   |
| 3.5688        | 11.8421 | 48   | 0.9486          | 0.6154   |
| 2.9736        | 12.8421 | 52   | 0.9201          | 0.5962   |
| 2.9736        | 13.8421 | 56   | 1.0203          | 0.5577   |
| 2.5257        | 14.8421 | 60   | 0.8558          | 0.6154   |
| 2.5257        | 15.8421 | 64   | 0.9309          | 0.5769   |
| 2.5257        | 16.8421 | 68   | 0.9707          | 0.5769   |
| 2.3819        | 17.8421 | 72   | 0.8505          | 0.6731   |
| 2.3819        | 18.8421 | 76   | 0.9245          | 0.6538   |
| 1.9541        | 19.8421 | 80   | 0.9093          | 0.6731   |
| 1.9541        | 20.8421 | 84   | 0.8463          | 0.7115   |
| 1.9541        | 21.8421 | 88   | 0.9135          | 0.6731   |
| 1.7643        | 22.8421 | 92   | 0.8720          | 0.7115   |
| 1.7643        | 23.8421 | 96   | 0.8631          | 0.7115   |
| 1.5146        | 24.8421 | 100  | 0.8862          | 0.6923   |
| 1.5146        | 25.8421 | 104  | 0.8584          | 0.75     |
| 1.5146        | 26.8421 | 108  | 0.9111          | 0.6923   |
| 1.4609        | 27.8421 | 112  | 0.8703          | 0.75     |
| 1.4609        | 28.8421 | 116  | 0.8478          | 0.7692   |
| 1.463         | 29.8421 | 120  | 0.8645          | 0.75     |
| 1.463         | 30.8421 | 124  | 0.9137          | 0.6731   |
| 1.463         | 31.8421 | 128  | 0.9311          | 0.6731   |
| 1.3699        | 32.8421 | 132  | 0.9070          | 0.7115   |
| 1.3699        | 33.8421 | 136  | 0.8930          | 0.7115   |
| 1.2756        | 34.8421 | 140  | 0.8930          | 0.7115   |
| 1.2756        | 35.8421 | 144  | 0.8935          | 0.7308   |
| 1.2756        | 36.8421 | 148  | 0.8960          | 0.7308   |
| 1.273         | 37.8421 | 152  | 0.8951          | 0.7308   |
| 1.273         | 38.8421 | 156  | 0.8955          | 0.7308   |
| 1.2626        | 39.8421 | 160  | 0.8954          | 0.7308   |

Framework versions

  • Transformers 4.47.1
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0