albertmartinez committed
Commit 7dd9865 · verified · 1 Parent(s): ab29049

Model save

Files changed (1)
  1. README.md +44 -45
README.md CHANGED
@@ -1,54 +1,53 @@
  ---
- datasets:
- - albertmartinez/OSDG
- license: mit
- metrics:
- - accuracy
- - precision
- - recall
- - f1
- pipeline_tag: text-classification
- widget:
- - text: Between the Social and the Spatial - Exploring Multiple Dimensions of Poverty
-     and Social Exclusion, Ashgate. Poverty in Europe and the USA, Exchanging Official
-     Measurement Methods”, Maastricht Graduate School of Governance Working Paper 2007/005.
-     Monitoring Absolute and Relative Poverty, ‘Not Enough’ Is Not the Same as ‘Much
-     Less’”, Review of Income and Wealth, 57(2), 247-269. Poverty and Social Exclusion
-     in Britain, The Policy Press, Bristol.
- - text: A circular economy is a way of achieving sustainable consumption and production,
-     as well as nature positive outcomes.
  model-index:
- - name: albertmartinez/bert-multilingual-sdg-classification
-   results:
-   - task:
-       type: text-generation
-       name: Text Generation
-     dataset:
-       name: albertmartinez/OSDG
-       type: albertmartinez/OSDG
-       split: test
-     metrics:
-     - type: accuracy
-       value: 0.8063482135876788
-       name: accuracy
  ---

- # albertmartinez/bert-multilingual-sdg-classification

- This model (BERT) is for classifying text with respect to the United Nations sustainable development goals (SDG).

  ### Training results

- | epoch | eval_loss | eval_accuracy | eval_precision | eval_recall | eval_f1 |
- |:-----:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|
- | 1 | 0.932697594165802 | 0.7336408412164803 | 0.7360917875267202 | 0.7336408412164803 | 0.7251914694909984 |
- | 2 | 0.7605898976325989 | 0.777343444609491 | 0.7831176862515112 | 0.777343444609491 | 0.777992037469195 |
- | 3 | 0.7255606651306152 | 0.7918849190837307 | 0.7922937492519824 | 0.7918849190837307 | 0.7897436793752113 |
- | 4 | 0.7620322108268738 | 0.782581502619029 | 0.791463371337951 | 0.782581502619029 | 0.7836557631495363 |
- | 5 | 0.7925569415092468 | 0.7989211164099758 | 0.7987793726148532 | 0.7989211164099758 | 0.7976319242111907 |
- | 6 | 0.8901194930076599 | 0.8008756156672661 | 0.8026922096228485 | 0.8008756156672661 | 0.7999251604304751 |
- | 7 | 0.9644309878349304 | 0.797044797122977 | 0.7998531076533889 | 0.797044797122977 | 0.7979133030034837 |
- | 8 | 1.0754749774932861 | 0.8050973340630131 | 0.8031314135784893 | 0.8050973340630131 | 0.8026788633046699 |
- | 9 | 1.1106163263320923 | 0.8066609334688453 | 0.806263013139723 | 0.8066609334688453 | 0.8061921653404509 |
- | 10 | 1.1396784782409668 | 0.8066609334688453 | 0.8056981354865757 | 0.8066609334688453 | 0.8059943909432397 |
  ---
+ license: apache-2.0
+ base_model: google-bert/bert-base-multilingual-uncased
+ tags:
+ - generated_from_trainer
  model-index:
+ - name: bert-multilingual-sdg-classification
+   results: []
  ---

+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->

+ # bert-multilingual-sdg-classification

+ This model is a fine-tuned version of [google-bert/bert-base-multilingual-uncased](https://huggingface.co/google-bert/bert-base-multilingual-uncased) on an unknown dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
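Until this section is filled in, here is a minimal inference sketch (not part of the commit). It assumes the checkpoint is published on the Hub as `albertmartinez/bert-multilingual-sdg-classification` and that the saved config maps class ids to SDG labels; the example text is taken from the old card's widget.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline.
# The repo id below is an assumption based on the model-index name.
classifier = pipeline(
    "text-classification",
    model="albertmartinez/bert-multilingual-sdg-classification",
)

text = (
    "A circular economy is a way of achieving sustainable consumption and "
    "production, as well as nature positive outcomes."
)
print(classifier(text))
# -> [{'label': ..., 'score': ...}]; label names depend on the saved config.
```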
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 32
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 600
+ - num_epochs: 3.0
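Expressed as Hugging Face `TrainingArguments`, these settings would look roughly as follows. This is a sketch, not code from the repo; the Trainer's default optimizer is AdamW with exactly the betas and epsilon listed above, so no explicit optimizer flag is needed, and `train_batch_size` is read here as the per-device batch size.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="bert-multilingual-sdg-classification",
    learning_rate=2e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",   # linear decay after the warmup phase
    warmup_steps=600,
    num_train_epochs=3.0,
)
```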

  ### Training results

+
+
+ ### Framework versions
+
+ - Transformers 4.42.4
+ - Pytorch 2.3.1+cu121
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1