olipol committed on
Commit 004d3f0 · verified · 1 Parent(s): 9bf8589

olipol/smaug_part1
README.md CHANGED
@@ -1,55 +1,66 @@
- ---
- library_name: transformers
- base_model: dkleczek/bert-base-polish-cased-v1
- tags:
- - generated_from_trainer
- datasets:
- - olipol/uj1
- model-index:
- - name: smaug_part1
-   results: []
- ---
-
- # smaug_part1
-
- This model is a fine-tuned version of [dkleczek/bert-base-polish-cased-v1](https://huggingface.co/dkleczek/bert-base-polish-cased-v1) on a Jagiellonian University dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.0000
-
- ## Model description
-
- The model recognizes whether a given sentence applies to the Jagiellonian University (trained on a Polish dataset).
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 2e-05
- - train_batch_size: 8
- - eval_batch_size: 8
- - seed: 42
- - optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- - lr_scheduler_type: linear
- - num_epochs: 10
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | No log | 1.0 | 125 | 0.0002 |
- | No log | 2.0 | 250 | 0.0001 |
- | No log | 3.0 | 375 | 0.0000 |
- | 0.0169 | 4.0 | 500 | 0.0000 |
- | 0.0169 | 5.0 | 625 | 0.0000 |
- | 0.0169 | 6.0 | 750 | 0.0000 |
- | 0.0169 | 7.0 | 875 | 0.0000 |
- | 0.0 | 8.0 | 1000 | 0.0000 |
- | 0.0 | 9.0 | 1125 | 0.0000 |
- | 0.0 | 10.0 | 1250 | 0.0000 |
-
-
- ### Framework versions
-
- - Transformers 4.46.3
- - Pytorch 2.5.1+cu118
- - Datasets 3.1.0
- - Tokenizers 0.20.3

+ ---
+ library_name: transformers
+ base_model: dkleczek/bert-base-polish-cased-v1
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: smaug_part1
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # smaug_part1
+
+ This model is a fine-tuned version of [dkleczek/bert-base-polish-cased-v1](https://huggingface.co/dkleczek/bert-base-polish-cased-v1) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0000
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - num_epochs: 10
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 0.027 | 1.0 | 599 | 0.0003 |
+ | 0.0052 | 2.0 | 1198 | 0.0001 |
+ | 0.0026 | 3.0 | 1797 | 0.0000 |
+ | 0.0011 | 4.0 | 2396 | 0.0000 |
+ | 0.0 | 5.0 | 2995 | 0.0000 |
+ | 0.0 | 6.0 | 3594 | 0.0000 |
+ | 0.0 | 7.0 | 4193 | 0.0000 |
+ | 0.0 | 8.0 | 4792 | 0.0000 |
+ | 0.0 | 9.0 | 5391 | 0.0000 |
+ | 0.0 | 10.0 | 5990 | 0.0000 |
+
+
+ ### Framework versions
+
+ - Transformers 4.46.3
+ - Pytorch 2.5.1+cu118
+ - Datasets 3.1.0
+ - Tokenizers 0.20.3
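For context, a minimal inference sketch showing how a sentence-classification checkpoint like the one described above could be loaded with the `transformers` pipeline API. The repository id `olipol/smaug_part1` and base model come from this commit; the example sentence and the meaning of the returned labels are illustrative assumptions, not output recorded in the card.

```python
# Minimal inference sketch (assumes the checkpoint is a sequence-classification
# model, as the earlier model description suggests; the sample sentence is illustrative).
from transformers import pipeline

classifier = pipeline("text-classification", model="olipol/smaug_part1")

# Polish sentence about the Jagiellonian University (hypothetical example input).
result = classifier("Uniwersytet Jagielloński został założony w 1364 roku.")
print(result)  # e.g. [{'label': 'LABEL_1', 'score': ...}] -- label names depend on the training config
```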
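Likewise, a hedged sketch of a `Trainer` setup that mirrors the hyperparameters listed in the card (learning rate 2e-05, batch size 8, seed 42, AdamW via adamw_torch, linear schedule, 10 epochs). Dataset preparation is omitted, and `num_labels=2`, the output directory, and the per-epoch evaluation strategy are assumptions rather than the author's actual training script.

```python
# Sketch of a Trainer configuration matching the hyperparameters listed above.
# Dataset loading/tokenization is omitted; num_labels=2 and output_dir are assumptions.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "dkleczek/bert-base-polish-cased-v1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

args = TrainingArguments(
    output_dir="smaug_part1",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    optim="adamw_torch",      # AdamW with betas=(0.9, 0.999) and epsilon=1e-08
    seed=42,
    eval_strategy="epoch",    # assumption: the card reports one validation loss per epoch
)

# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=train_ds, eval_dataset=eval_ds)  # tokenized datasets not shown
# trainer.train()
```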
runs/Dec12_23-53-54_DESKTOP-HMVHQ8L/events.out.tfevents.1734044040.DESKTOP-HMVHQ8L.8360.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:78ba57044f617a014756f04ad1c15d1625ba89f7750b4730aa1920e993dcb732
+ size 5043
runs/Dec12_23-55-56_DESKTOP-HMVHQ8L/events.out.tfevents.1734044159.DESKTOP-HMVHQ8L.5136.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:845c75cb6238cea78ee4797721513e66325d7d7a22db21cc756d8148b0c2d1ea
+ size 10428
tokenizer.json CHANGED
@@ -1,7 +1,21 @@
  {
    "version": "1.0",
-   "truncation": null,
-   "padding": null,
    "added_tokens": [
      {
        "id": 0,

  {
    "version": "1.0",
+   "truncation": {
+     "direction": "Right",
+     "max_length": 512,
+     "strategy": "LongestFirst",
+     "stride": 0
+   },
+   "padding": {
+     "strategy": {
+       "Fixed": 512
+     },
+     "direction": "Right",
+     "pad_to_multiple_of": null,
+     "pad_id": 0,
+     "pad_type_id": 0,
+     "pad_token": "[PAD]"
+   },
    "added_tokens": [
      {
        "id": 0,
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:dbac4e2baa891c7159e240aee3ab37355498851a04c31598001c6513fb25bd61
  size 5304

  version https://git-lfs.github.com/spec/v1
+ oid sha256:ee4ec0e4cb4e1f0abc2d5a72d4e8ade486f0f25cc6d9f22d8c89f91075444ef3
  size 5304