Thienpkae committed · Commit 77f0f70 · verified · 1 Parent(s): f57ba5f

End of training

Files changed (1)
  1. README.md +15 -16
README.md CHANGED
@@ -13,14 +13,13 @@ model-index:
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->
  
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/khackho01125-CMC-University/huggingface/runs/v98t7due)
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/khackho01125-CMC-University/huggingface/runs/v98t7due)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/khackho01125-CMC-University/huggingface/runs/l1s9fd18)
  # test-model
  
  This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.3880
- - Accuracy: 0.8994
+ - Loss: 0.2593
+ - Accuracy: 0.9276
  
  ## Model description
  
@@ -39,14 +38,14 @@ More information needed
  ### Training hyperparameters
  
  The following hyperparameters were used during training:
- - learning_rate: 2e-05
+ - learning_rate: 5e-05
  - train_batch_size: 16
  - eval_batch_size: 8
  - seed: 42
  - gradient_accumulation_steps: 4
  - total_train_batch_size: 64
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
+ - lr_scheduler_type: cosine
  - lr_scheduler_warmup_ratio: 0.3
  - training_steps: 600
  - mixed_precision_training: Native AMP
@@ -55,16 +54,16 @@ The following hyperparameters were used during training:
  
  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:------:|:----:|:---------------:|:--------:|
- | 1.0477 | 0.9639 | 60 | 0.9638 | 0.5412 |
- | 0.9116 | 1.9277 | 120 | 0.7981 | 0.7123 |
- | 0.76 | 2.8916 | 180 | 0.6920 | 0.7425 |
- | 0.6584 | 3.8554 | 240 | 0.5988 | 0.7666 |
- | 0.5391 | 4.8193 | 300 | 0.5216 | 0.8471 |
- | 0.463 | 5.7831 | 360 | 0.4832 | 0.8551 |
- | 0.3977 | 6.7470 | 420 | 0.4274 | 0.8833 |
- | 0.3647 | 7.7108 | 480 | 0.4347 | 0.8753 |
- | 0.33 | 8.6747 | 540 | 0.3900 | 0.8833 |
- | 0.318 | 9.6386 | 600 | 0.3880 | 0.8994 |
+ | 1.0164 | 0.9639 | 60 | 0.8742 | 0.6519 |
+ | 0.8359 | 1.9277 | 120 | 0.6836 | 0.7304 |
+ | 0.642 | 2.8916 | 180 | 0.5834 | 0.7988 |
+ | 0.4896 | 3.8554 | 240 | 0.5158 | 0.8169 |
+ | 0.3864 | 4.8193 | 300 | 0.3212 | 0.8974 |
+ | 0.346 | 5.7831 | 360 | 0.2889 | 0.9135 |
+ | 0.2796 | 6.7470 | 420 | 0.2591 | 0.9256 |
+ | 0.244 | 7.7108 | 480 | 0.2889 | 0.9155 |
+ | 0.2058 | 8.6747 | 540 | 0.2615 | 0.9215 |
+ | 0.1616 | 9.6386 | 600 | 0.2593 | 0.9276 |
  
  
  ### Framework versions
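
The hyperparameters listed in the updated card map directly onto `transformers.TrainingArguments`. The sketch below is a reconstruction for reference, not the training script from this commit: `output_dir` and `report_to` are assumptions (the W&B badge suggests wandb logging), `fp16=True` stands in for "Native AMP", and the Adam settings shown are the values the card reports.

```python
from transformers import TrainingArguments

# Reconstruction of the hyperparameters listed in the updated card.
training_args = TrainingArguments(
    output_dir="test-model",        # placeholder output directory (assumption)
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 16 per device x 4 steps = total batch size 64
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.3,
    max_steps=600,                  # training_steps: 600
    fp16=True,                      # "Native AMP" mixed-precision training
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    report_to="wandb",              # assumption, based on the W&B run badge
)
```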
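
For completeness, a minimal inference sketch. It assumes the checkpoint exposes an audio-classification head, which the wav2vec2-base backbone and the accuracy metric suggest, and it uses `Thienpkae/test-model` as a placeholder repository id that may not match the actual repo path.

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

model_id = "Thienpkae/test-model"  # placeholder: point at the actual repository
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)
model.eval()

# One second of silence as a stand-in for a real waveform at the expected rate.
waveform = np.zeros(feature_extractor.sampling_rate, dtype=np.float32)
inputs = feature_extractor(
    waveform,
    sampling_rate=feature_extractor.sampling_rate,
    return_tensors="pt",
)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```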