End of training

Files changed:
- README.md (+45 -45)
- model.safetensors (+1 -1)

README.md CHANGED
@@ -17,58 +17,58 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [Goader/liberta-large](https://huggingface.co/Goader/liberta-large) on the universal_dependencies dataset.
 It achieves the following results on the evaluation set:
-- Loss:
+- Loss: 0.6796
-- : {'precision': 0.
+- : {'precision': 0.6666666666666666, 'recall': 0.15384615384615385, 'f1': 0.25, 'number': 13}
-- Arataxis: {'precision': 0.
+- Arataxis: {'precision': 0.5490196078431373, 'recall': 0.3010752688172043, 'f1': 0.38888888888888895, 'number': 93}
-- Arataxis:discourse: {'precision': 0.
+- Arataxis:discourse: {'precision': 0.42857142857142855, 'recall': 0.35294117647058826, 'f1': 0.3870967741935484, 'number': 17}
 - Arataxis:rel: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5}
-- Ark: {'precision': 0.
+- Ark: {'precision': 0.7448275862068966, 'recall': 0.6835443037974683, 'f1': 0.712871287128713, 'number': 158}
-- Ase: {'precision': 0.
+- Ase: {'precision': 0.8467852257181943, 'recall': 0.7512135922330098, 'f1': 0.7961414790996785, 'number': 824}
-- Bj: {'precision': 0.
+- Bj: {'precision': 0.7874659400544959, 'recall': 0.6897374701670644, 'f1': 0.7353689567430025, 'number': 419}
-- Bl: {'precision': 0.
+- Bl: {'precision': 0.808300395256917, 'recall': 0.6650406504065041, 'f1': 0.7297056199821588, 'number': 615}
-- C: {'precision': 0.
+- C: {'precision': 0.8027210884353742, 'recall': 0.6982248520710059, 'f1': 0.7468354430379747, 'number': 338}
-- Cl: {'precision': 0.
+- Cl: {'precision': 0.8461538461538461, 'recall': 0.34375, 'f1': 0.4888888888888889, 'number': 32}
 - Cl:adv: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3}
-- Cl:relcl: {'precision': 0.
+- Cl:relcl: {'precision': 0.7383177570093458, 'recall': 0.6370967741935484, 'f1': 0.6839826839826839, 'number': 124}
-- Comp: {'precision': 0.
+- Comp: {'precision': 0.75, 'recall': 0.711864406779661, 'f1': 0.7304347826086958, 'number': 118}
-- Comp:sp: {'precision': 0.
+- Comp:sp: {'precision': 0.6818181818181818, 'recall': 0.5357142857142857, 'f1': 0.6, 'number': 28}
-- Dvcl: {'precision': 0.
+- Dvcl: {'precision': 0.7466666666666667, 'recall': 0.6292134831460674, 'f1': 0.6829268292682926, 'number': 89}
 - Dvcl:sp: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2}
 - Dvcl:svc: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3}
-- Dvmod: {'precision': 0.
+- Dvmod: {'precision': 0.7583333333333333, 'recall': 0.6859296482412061, 'f1': 0.7203166226912929, 'number': 398}
-- Dvmod:det: {'precision':
+- Dvmod:det: {'precision': 1.0, 'recall': 0.5, 'f1': 0.6666666666666666, 'number': 2}
-- Et: {'precision': 0.
+- Et: {'precision': 0.8235294117647058, 'recall': 0.7216494845360825, 'f1': 0.7692307692307693, 'number': 194}
-- Et:numgov: {'precision': 0.
+- Et:numgov: {'precision': 0.6363636363636364, 'recall': 0.7777777777777778, 'f1': 0.7000000000000001, 'number': 9}
 - Et:nummod: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1}
-- Iscourse: {'precision': 0.
+- Iscourse: {'precision': 0.6372549019607843, 'recall': 0.5371900826446281, 'f1': 0.5829596412556053, 'number': 121}
 - Islocated: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4}
-- Ixed: {'precision': 0.
+- Ixed: {'precision': 0.25, 'recall': 0.10526315789473684, 'f1': 0.14814814814814814, 'number': 19}
 - Lat:abs: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2}
-- Lat:foreign: {'precision': 0.
+- Lat:foreign: {'precision': 0.3333333333333333, 'recall': 0.15789473684210525, 'f1': 0.21428571428571427, 'number': 19}
-- Lat:name: {'precision': 0.
+- Lat:name: {'precision': 0.6666666666666666, 'recall': 0.43137254901960786, 'f1': 0.5238095238095237, 'number': 51}
-- Lat:range: {'precision':
+- Lat:range: {'precision': 0.5555555555555556, 'recall': 0.45454545454545453, 'f1': 0.5, 'number': 11}
 - Lat:repeat: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3}
-- Lat:title: {'precision': 0.
+- Lat:title: {'precision': 0.5, 'recall': 0.3381294964028777, 'f1': 0.4034334763948498, 'number': 139}
-- Mod: {'precision': 0.
+- Mod: {'precision': 0.7354392892398816, 'recall': 0.6378424657534246, 'f1': 0.6831728564878496, 'number': 1168}
-- Obj: {'precision': 0.
+- Obj: {'precision': 0.5, 'recall': 0.6428571428571429, 'f1': 0.5625000000000001, 'number': 14}
 - Ocative: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1}
 - Oeswith: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1}
-- Ompound: {'precision': 0.
+- Ompound: {'precision': 0.6206896551724138, 'recall': 0.32727272727272727, 'f1': 0.42857142857142855, 'number': 55}
-- Onj: {'precision': 0.
+- Onj: {'precision': 0.6871165644171779, 'recall': 0.5197215777262181, 'f1': 0.5918097754293263, 'number': 431}
-- Oot: {'precision': 0.
+- Oot: {'precision': 0.9298969072164949, 'recall': 0.902, 'f1': 0.915736040609137, 'number': 500}
-- Op: {'precision': 0.
+- Op: {'precision': 0.75, 'recall': 0.6428571428571429, 'f1': 0.6923076923076924, 'number': 42}
-- Ppos: {'precision': 0.
+- Ppos: {'precision': 0.35714285714285715, 'recall': 0.21428571428571427, 'f1': 0.2678571428571429, 'number': 70}
-- Rphan: {'precision': 0.
+- Rphan: {'precision': 0.3333333333333333, 'recall': 0.08333333333333333, 'f1': 0.13333333333333333, 'number': 12}
-- Subj: {'precision': 0.
+- Subj: {'precision': 0.8471001757469244, 'recall': 0.7761674718196457, 'f1': 0.8100840336134454, 'number': 621}
-- Ummod: {'precision': 0.
+- Ummod: {'precision': 0.5294117647058824, 'recall': 0.5294117647058824, 'f1': 0.5294117647058824, 'number': 34}
-- Ummod:gov: {'precision': 0.
+- Ummod:gov: {'precision': 0.56, 'recall': 0.3888888888888889, 'f1': 0.45901639344262296, 'number': 36}
-- Unct: {'precision': 0.
+- Unct: {'precision': 0.8179775280898877, 'recall': 0.674074074074074, 'f1': 0.7390862944162436, 'number': 1620}
-- Ux: {'precision': 0.
+- Ux: {'precision': 0.6363636363636364, 'recall': 0.4375, 'f1': 0.5185185185185185, 'number': 16}
-- Xpl: {'precision': 0.
+- Xpl: {'precision': 0.625, 'recall': 0.7142857142857143, 'f1': 0.6666666666666666, 'number': 7}
-- Overall Precision: 0.
+- Overall Precision: 0.7829
-- Overall Recall: 0.
+- Overall Recall: 0.6620
-- Overall F1: 0.
+- Overall F1: 0.7174
-- Overall Accuracy: 0.
+- Overall Accuracy: 0.7711
 
 ## Model description
 
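A note on the metric names in the hunk above: they appear to be Universal Dependencies relation labels with their first letter stripped ("Arataxis" for parataxis, "Dvmod" for advmod, "Unct" for punct). That is consistent with seqeval's default tag parsing, which expects IOB-style labels such as B-PER and reads the first character of every tag as the chunk prefix. A minimal sketch of the effect, with illustrative inputs:

```python
from seqeval.metrics import classification_report

# seqeval parses each tag as <prefix><entity-type>, expecting IOB tags like
# "B-PER"; a bare relation label therefore loses its leading character.
y_true = [["punct", "advmod", "parataxis"]]
y_pred = [["punct", "advmod", "parataxis"]]

# The per-label rows come out as "arataxis", "dvmod" and "unct"
# rather than the original relation names.
print(classification_report(y_true, y_pred))
```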
@@ -88,12 +88,12 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size:
+- train_batch_size: 16
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs:
+- num_epochs: 10
 
 ### Training results
 
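The hyperparameters in this hunk map one-to-one onto transformers TrainingArguments. A minimal reconstruction for reproduction purposes, assuming defaults for everything the card does not list (the output directory is a placeholder):

```python
from transformers import TrainingArguments

# Hyperparameters as listed in the model card; "out" is a placeholder.
args = TrainingArguments(
    output_dir="out",
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```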
@@ -103,5 +103,5 @@ The following hyperparameters were used during training:
 
 - Transformers 4.39.3
 - Pytorch 1.11.0a0+17540c5
-- Datasets 2.
+- Datasets 2.21.0
 - Tokenizers 0.15.2
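For completeness, a sketch of running the resulting checkpoint for inference. The repository id below is a placeholder, since the diff does not name the fine-tuned repo; LiBERTa is a Ukrainian encoder, so the example input is Ukrainian:

```python
from transformers import pipeline

# Placeholder repo id; substitute wherever this fine-tune is hosted.
tagger = pipeline("token-classification", model="your-username/liberta-large-ud")

# Each token receives one dependency-relation label from the model's head.
print(tagger("Це тестове речення."))
```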
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:50aa28a1a3b5c8553bac6de3f6f63ea8553bc3041063100bdc26709ae899f6e5
 size 1342707664
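The model.safetensors change touches only the Git LFS pointer, which records the sha256 and byte size of the weights rather than the weights themselves. A quick integrity check against the new pointer values (the local path is illustrative):

```python
import hashlib
import os

# Values from the updated LFS pointer above.
EXPECTED_OID = "50aa28a1a3b5c8553bac6de3f6f63ea8553bc3041063100bdc26709ae899f6e5"
EXPECTED_SIZE = 1342707664

path = "model.safetensors"  # illustrative local path to the downloaded file

assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"

sha = hashlib.sha256()
with open(path, "rb") as f:
    for block in iter(lambda: f.read(1 << 20), b""):
        sha.update(block)
assert sha.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("pointer matches downloaded weights")
```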