LaLegumbreArtificial committed
Commit 88b102a (verified)
Parent: fff31f8

End of training

Files changed (1)
  1. README.md +9 -9
README.md CHANGED
@@ -13,17 +13,13 @@ model-index:
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->
 
 - [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/jose-contreras-itj/huggingface/runs/vl3tqkdm)
 - [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/jose-contreras-itj/huggingface/runs/vl3tqkdm)
 - [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/jose-contreras-itj/huggingface/runs/vl3tqkdm)
 - [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/jose-contreras-itj/huggingface/runs/vl3tqkdm)
 - [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/jose-contreras-itj/huggingface/runs/vl3tqkdm)
 + [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/jose-contreras-itj/huggingface/runs/p10pmnhn)
  # Fraunhofer_Classical
 
  This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on an unknown dataset.
  It achieves the following results on the evaluation set:
 - - Loss: 0.0580
 - - Accuracy: 0.9787
 + - Loss: 0.0235
 + - Accuracy: 0.9907
 
  ## Model description
 
@@ -51,13 +47,17 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_ratio: 0.1
 - - num_epochs: 1
 + - num_epochs: 5
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:------:|:----:|:---------------:|:--------:|
 - | 0.0444 | 0.9954 | 109 | 0.0580 | 0.9787 |
 + | 0.0608 | 0.9954 | 109 | 0.0879 | 0.97 |
 + | 0.0607 | 2.0 | 219 | 0.0461 | 0.9833 |
 + | 0.0436 | 2.9954 | 328 | 0.0351 | 0.9873 |
 + | 0.0202 | 4.0 | 438 | 0.0333 | 0.9883 |
 + | 0.0236 | 4.9772 | 545 | 0.0235 | 0.9907 |
 
 
  ### Framework versions
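The hyperparameter bullets in the hunk above correspond directly to fields of the `transformers` `TrainingArguments` API. The following is a minimal, hypothetical sketch of that mapping, not the author's actual training script: the learning rate, batch size, and output directory are placeholders because they fall outside the diffed lines, and the only value this commit actually changes is `num_epochs` (1 → 5).

```python
# Hypothetical sketch of the hyperparameters listed above expressed as
# transformers.TrainingArguments. Values marked "placeholder" are not shown
# in this diff and are illustrative only.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Fraunhofer_Classical",  # placeholder output directory
    num_train_epochs=5,                 # changed from 1 to 5 in this commit
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    learning_rate=5e-5,                 # placeholder: actual value is outside the diff
    per_device_train_batch_size=8,      # placeholder
    report_to="wandb",                  # consistent with the W&B badge above
)
```

A `Trainer` would consume these arguments together with the model, datasets, and image processor, none of which appear in this commit.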
 
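As a usage note rather than part of the commit, below is a minimal sketch of loading the fine-tuned checkpoint for inference with the `transformers` Auto classes. The repository id and the image path are assumptions made for illustration; substitute the actual Hub id of this model.

```python
# Hypothetical usage sketch: load the fine-tuned BEiT classifier and classify
# one image. The repo id and image path below are placeholders.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "LaLegumbreArtificial/Fraunhofer_Classical"  # assumed repository id
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("example.png").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```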