---
license: apache-2.0
language:
- de
pipeline_tag: text-classification
inference: false
---
# Affective Norms Extrapolation Model for German Language

## Model Description

This transformer-based model extrapolates affective norms for German words, predicting ratings such as valence, arousal, and imageability. It was fine-tuned from the German BERT model (https://huggingface.co/dbmdz/bert-base-german-uncased) and extended with additional layers that predict the affective dimensions. The model was first released as part of the publication "Extrapolation of affective norms using transformer-based neural networks and its application to experimental stimuli selection" (Plisiecki & Sobieszek, 2023) [https://doi.org/10.3758/s13428-023-02212-3].

## Training Data

The model was trained on the BAWL-R dataset for German by Võ et al. (2009) [ https://doi.org/10.3758/BRM.41.2.534 ], which includes 2902 words rated by participants on various emotional and semantic dimensions. The dataset was split into training, validation, and test sets in an 8:1:1 ratio.
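The 8:1:1 split described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the seed and the use of `random.shuffle` are assumptions.

```python
# Illustrative sketch of an 8:1:1 train/validation/test split over
# the 2902 BAWL-R items (not the authors' actual code; the seed is
# arbitrary).
import random

def split_indices(n, seed=42):
    """Shuffle indices 0..n-1 and split them 8:1:1."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

train, val, test = split_indices(2902)
print(len(train), len(val), len(test))  # 2321 290 291
```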

## Performance

The model achieved the following Pearson correlations with human judgments on the test set:

- Valence: 0.80
- Arousal: 0.70
- Imageability: 0.82
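The scores above are Pearson correlations between model predictions and human ratings on the held-out test set. A minimal sketch of how such a score is computed, using NumPy and toy numbers (not the paper's data):

```python
# Illustrative only: Pearson correlation between model predictions
# and human ratings. The numbers below are made up for the example.
import numpy as np

def pearson(preds, human):
    """Pearson correlation coefficient between two rating vectors."""
    return float(np.corrcoef(preds, human)[0, 1])

preds = [2.1, 3.4, 1.2, 4.8, 3.9]   # toy model outputs
human = [2.0, 3.6, 1.0, 4.5, 4.1]   # toy human ratings
print(round(pearson(preds, human), 3))
```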


## Usage

You can use the model and tokenizer as follows:

Because the model uses a custom model class, it cannot be loaded through the usual Hugging Face model setups. First, clone the repository with the bash command below (this may take some time):

```bash
git clone https://huggingface.co/hplisiecki/word2affect_german
```

Proceed as follows:

```python
from word2affect_german.model_script import CustomModel # importing the custom model class
from transformers import AutoTokenizer

model_directory = "word2affect_german" # path to the cloned repository

model = CustomModel.from_pretrained(model_directory)
tokenizer = AutoTokenizer.from_pretrained(model_directory)

inputs = tokenizer("test", return_tensors="pt")
outputs = model(inputs['input_ids'], inputs['attention_mask'])

# Print out the emotion ratings
for emotion, rating in zip(['Valence', 'Arousal', 'Imageability'], outputs):
    print(f"{emotion}: {rating.item()}")
```

## Citation

If you use this model, please cite the following paper:

```bibtex
@article{Plisiecki_Sobieszek_2023,
  title={Extrapolation of affective norms using transformer-based neural networks and its application to experimental stimuli selection},
  author={Plisiecki, Hubert and Sobieszek, Adam},
  journal={Behavior Research Methods},
  year={2023},
  pages={1-16},
  doi={10.3758/s13428-023-02212-3}
}
```