Nared45 committed
Commit f1f6956 · verified · 1 Parent(s): f27beb1

Nared45/roberta-base_correlation
README.md CHANGED
@@ -1,6 +1,7 @@
 ---
+library_name: transformers
 license: mit
-base_model: roberta-base
+base_model: FacebookAI/roberta-base
 tags:
 - generated_from_trainer
 model-index:
@@ -13,9 +14,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # roberta-base_correlation
 
-This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
+This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4129
+- Loss: 0.7933
 
 ## Model description
 
@@ -34,44 +35,28 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 1e-05
+- learning_rate: 2e-05
 - train_batch_size: 32
-- eval_batch_size: 16
+- eval_batch_size: 32
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 5000
-- num_epochs: 20
+- num_epochs: 5
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 0.6673        | 1.0   | 50   | 0.6690          |
-| 0.6707        | 2.0   | 100  | 0.6668          |
-| 0.6593        | 3.0   | 150  | 0.6631          |
-| 0.6597        | 4.0   | 200  | 0.6583          |
-| 0.6476        | 5.0   | 250  | 0.6536          |
-| 0.6511        | 6.0   | 300  | 0.6487          |
-| 0.6342        | 7.0   | 350  | 0.6459          |
-| 0.6408        | 8.0   | 400  | 0.6427          |
-| 0.6318        | 9.0   | 450  | 0.6405          |
-| 0.6556        | 10.0  | 500  | 0.6372          |
-| 0.6141        | 11.0  | 550  | 0.6289          |
-| 0.59          | 12.0  | 600  | 0.6089          |
-| 0.5781        | 13.0  | 650  | 0.5815          |
-| 0.5529        | 14.0  | 700  | 0.5550          |
-| 0.5367        | 15.0  | 750  | 0.5355          |
-| 0.5107        | 16.0  | 800  | 0.5014          |
-| 0.4441        | 17.0  | 850  | 0.4775          |
-| 0.4206        | 18.0  | 900  | 0.4477          |
-| 0.3608        | 19.0  | 950  | 0.4302          |
-| 0.3241        | 20.0  | 1000 | 0.4129          |
+| No log        | 1.0   | 78   | 0.5864          |
+| No log        | 2.0   | 156  | 0.4928          |
+| No log        | 3.0   | 234  | 0.5737          |
+| No log        | 4.0   | 312  | 0.8163          |
+| No log        | 5.0   | 390  | 0.7933          |
 
 
 ### Framework versions
 
-- Transformers 4.38.2
-- Pytorch 2.2.1+cu121
-- Datasets 2.18.0
-- Tokenizers 0.15.2
+- Transformers 4.44.2
+- Pytorch 2.2.0
+- Datasets 2.21.0
+- Tokenizers 0.19.1
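As a quick sanity check on the updated results table, the step counts are internally consistent. The numbers 78 and 390 come straight from the table; the training-set size below is an inferred upper bound, not a figure stated anywhere in the card:

```python
# Back-of-the-envelope check on the updated results table.
# steps_per_epoch and num_epochs are taken from the new README diff;
# the example count is an upper bound derived from them, not documented.
steps_per_epoch = 78        # step count at epoch 1.0 in the table
num_epochs = 5
train_batch_size = 32

total_steps = steps_per_epoch * num_epochs              # final row's step
max_train_examples = steps_per_epoch * train_batch_size # upper bound

print(total_steps)          # 390
print(max_train_examples)   # 2496
```

The "No log" entries in the Training Loss column are consistent with this: `Trainer` only records training loss every `logging_steps` steps (500 by default), which exceeds the run's 390 total steps, so no training loss was ever logged.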
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "roberta-base",
+  "_name_or_path": "FacebookAI/roberta-base",
   "architectures": [
     "RobertaForSequenceClassification"
   ],
@@ -10,10 +10,6 @@
   "hidden_act": "gelu",
   "hidden_dropout_prob": 0.1,
   "hidden_size": 768,
-  "id2label": {
-    "0": 0,
-    "1": 1
-  },
   "initializer_range": 0.02,
   "intermediate_size": 3072,
   "layer_norm_eps": 1e-05,
@@ -25,7 +21,7 @@
   "position_embedding_type": "absolute",
   "problem_type": "single_label_classification",
   "torch_dtype": "float32",
-  "transformers_version": "4.38.2",
+  "transformers_version": "4.44.2",
   "type_vocab_size": 1,
   "use_cache": true,
   "vocab_size": 50265
logs/events.out.tfevents.1725506043.ip-172-16-38-49.ap-southeast-1.compute.internal.10329.4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e803e5e6f1bb7f35b8ed4edfcc31ee71918584a28a94937f970bac9029d09382
+size 6684
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:008689bea907a0d7481c5a02577b710599c158d5315333813f74115f5c2a2356
+oid sha256:2ec21638f24684c1546bc372e847e7c2d4d3314b41ff5be9fb33aab3d6cf68e5
 size 498612824
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:391f24090d407690b115a10ccfa86b01920758645eaba9a96e7d31b39776ef10
-size 4920
+oid sha256:96fd45553baa0af07050035988cba31fa218b42f352d9e5413f61739ad3faf69
+size 5176