mfigurski80 committed
Commit 358f13b · 1 Parent(s): 55b6f2f

update model card README.md

Files changed (1)
  1. README.md +8 -7
README.md CHANGED
@@ -14,7 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.9602
+ - Loss: 0.6465
 
  ## Model description
 
@@ -35,23 +35,24 @@ More information needed
  The following hyperparameters were used during training:
  - learning_rate: 5e-05
  - train_batch_size: 16
- - eval_batch_size: 64
+ - eval_batch_size: 32
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - num_epochs: 2
+ - num_epochs: 3
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss |
  |:-------------:|:-----:|:----:|:---------------:|
- | 0.9434 | 1.0 | 2812 | 0.9601 |
- | 0.9429 | 2.0 | 5624 | 0.9602 |
+ | 0.6363 | 1.0 | 2812 | 0.6465 |
+ | 0.6361 | 2.0 | 5624 | 0.6465 |
+ | 0.6358 | 3.0 | 8436 | 0.6465 |
 
 
  ### Framework versions
 
  - Transformers 4.24.0
  - Pytorch 1.12.1+cu113
- - Datasets 2.6.1
- - Tokenizers 0.13.1
+ - Datasets 2.7.0
+ - Tokenizers 0.13.2
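
For context, the hyperparameters in the updated card map onto a standard 🤗 Transformers `Trainer` configuration. The sketch below is not the author's training script, only a minimal reconstruction under stated assumptions: the task head (a masked-LM head), the dummy text corpus, and the `distilbert-finetuned` output path are placeholders, since the card lists the dataset as None; the learning rate, batch sizes, seed, linear scheduler, Adam settings, and epoch count are taken from the card.

```python
"""Minimal sketch of a fine-tuning run matching the card's hyperparameters.
Assumptions: masked-LM objective, dummy text data, placeholder output dir."""
from datasets import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

# Dummy corpus standing in for the unspecified training data.
texts = ["example sentence one.", "example sentence two."] * 16
dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)
splits = dataset.train_test_split(test_size=0.25, seed=42)

args = TrainingArguments(
    output_dir="distilbert-finetuned",   # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    num_train_epochs=3,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",         # default Adam betas/epsilon match the card
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    data_collator=DataCollatorForLanguageModeling(tokenizer),
)
trainer.train()
```

With the pinned framework versions from the card (Transformers 4.24.0, Datasets 2.7.0, Tokenizers 0.13.2), this setup reproduces the reported optimizer and scheduler defaults; only the data pipeline and model head would need to be swapped for the real task.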