## Model Description
This transformer-based model extrapolates affective norms for German words, predicting ratings on dimensions such as valence, arousal, and dominance. It was fine-tuned from the German BERT model (https://huggingface.co/dbmdz/bert-base-german-uncased), extended with additional layers that predict the affective dimensions. The model was first released as part of the publication "Extrapolation of affective norms using transformer-based neural networks and its application to experimental stimuli selection" (Plisiecki & Sobieszek, 2023) [https://doi.org/10.3758/s13428-023-02212-3].
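A minimal sketch of how regression heads of this kind can sit on top of the encoder's pooled output. The layer widths, head structure, and names here are illustrative assumptions, not the released architecture; a random tensor stands in for the BERT pooled output so the sketch runs without downloading the backbone.

```python
import torch
import torch.nn as nn

class AffectiveHead(nn.Module):
    """Illustrative regression head mapping a BERT pooled output
    (hidden size 768 for bert-base) to one scalar affective rating.
    The intermediate width of 64 is an assumption for illustration."""

    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # one predicted rating per word
        )

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        return self.net(pooled)

# Stand-in for the encoder's pooled output for a batch of 4 words.
pooled = torch.randn(4, 768)
heads = {dim: AffectiveHead() for dim in ("valence", "arousal", "dominance")}
preds = {dim: head(pooled) for dim, head in heads.items()}
# Each head yields a (batch, 1) tensor of predicted ratings.
```

In practice the heads would be trained jointly with (or on top of) the fine-tuned encoder, with a regression loss such as MSE against the human norm ratings.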
## Training Data
The model was trained on the Berlin Affective Word List Reloaded (BAWL-R) dataset for German by Võ et al. (2009) [https://doi.org/10.3758/BRM.41.2.534], which includes 2,902 words rated by participants on various emotional and semantic dimensions. The dataset was split into training, validation, and test sets in an 8:1:1 ratio.
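A sketch of what an 8:1:1 split of the 2,902 items looks like. The shuffling, seed, and rounding here are assumptions for illustration; the exact split used in the paper may differ.

```python
import random

# Placeholder items standing in for the 2,902 BAWL-R words.
words = [f"word_{i}" for i in range(2902)]
random.Random(42).shuffle(words)  # seed is an arbitrary choice

n = len(words)
n_train = int(n * 0.8)  # 80% for training
n_val = int(n * 0.1)    # 10% for validation

train = words[:n_train]
val = words[n_train:n_train + n_val]
test = words[n_train + n_val:]  # remainder (~10%) for testing
```

With truncating rounding this yields 2,321 training, 290 validation, and 291 test words; the test set absorbs the remainder so every word is assigned exactly once.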
## Performance