---
language:
- 'no'
- nb
- nn
license: cc-by-nc-4.0
size_categories:
- 10K<n<100K
task_categories:
- token-classification
pretty_name: NoReC TSA
dataset_info:
- config_name: default
  features:
  - name: idx
    dtype: string
  - name: tokens
    sequence: string
  - name: tsa_tags
    sequence: string
  splits:
  - name: train
    num_bytes: 2296476
    num_examples: 8634
  - name: validation
    num_bytes: 411562
    num_examples: 1531
  - name: test
    num_bytes: 346288
    num_examples: 1272
  download_size: 899078
  dataset_size: 3054326
- config_name: intensity
  features:
  - name: idx
    dtype: string
  - name: tokens
    sequence: string
  - name: tsa_tags
    sequence: string
  splits:
  - name: train
    num_bytes: 2316306
    num_examples: 8634
  - name: validation
    num_bytes: 414972
    num_examples: 1531
  - name: test
    num_bytes: 349228
    num_examples: 1272
  download_size: 902284
  dataset_size: 3080506
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
- config_name: intensity
  data_files:
  - split: train
    path: intensity/train-*
  - split: validation
    path: intensity/validation-*
  - split: test
    path: intensity/test-*
---
# Dataset Card for NoReC TSA


## Dataset Description

<!--- **Homepage:**  --->
- **Repository:** 
https://github.com/ltgoslo/norec_tsa
- **Paper:** 
[A Fine-Grained Sentiment Dataset for Norwegian](https://aclanthology.org/2020.lrec-1.618/)

<!---
- **Leaderboard:** 
- **Point of Contact:** 
--->

### Dataset Summary
The dataset contains tokenized Norwegian sentences in which each token is tagged with the sentiment expressed towards it. It is derived from the manually annotated [NoReC_fine](https://github.com/ltgoslo/norec_fine), which provides rich annotations for each sentiment expression in the texts.  
The texts are a subset of the Norwegian Review Corpus [NoReC](https://github.com/ltgoslo/norec).

### Supported Tasks and Leaderboards
[NorBench](https://github.com/ltgoslo/norbench) provides TSA evaluation scripts using this dataset, and a leaderboard comparing large language models for downstream NLP tasks in Norwegian.  

### Languages
Norwegian, predominantly the Bokmål (nb) written variant, with a small number of Nynorsk (nn) training sentences.

| variant   | split   |   sents |   docs |
|:-----|:--------|--------:|-------:|
| nb   | dev     |    1531 |     44 |
| nb   | test    |    1272 |     47 |
| nb   | train   |    8556 |    323 |
| nn   | train   |      78 |      4 |

## Dataset Structure
The dataset comes in two flavours:
- the `default` configuration yields labels with binary Positive / Negative sentiment,
- the `intensity` configuration additionally encodes sentiment intensity: 1 (Slight), 2 (Standard), or 3 (Strong).

The config name must be passed explicitly to get the intensity version, e.g. `tsa_data = load_dataset("ltg/norec_tsa", "intensity")`; a short loading sketch is given below.
The dataset comes with predefined train, dev (validation) and test splits.
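
As a minimal usage sketch (assuming the Hugging Face `datasets` library is installed), both configurations can be loaded by name:

```python
from datasets import load_dataset

# Binary Positive/Negative labels (default config):
tsa_default = load_dataset("ltg/norec_tsa")

# Labels with intensity (the config name must be given explicitly):
tsa_intensity = load_dataset("ltg/norec_tsa", "intensity")

print(tsa_default)                # train / validation / test splits
print(tsa_intensity["train"][0])  # one tokenized sentence with its tags
```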


### Data Instances
Config "default" example instance:
```
{'idx': '701363-08-02',
 'tokens': ['Vi', 'liker', 'det', '.'],
 'tsa_tags': ['O', 'O', 'B-targ-Positive', 'O']}
```
Config "intensity"  example instance:
```
{'idx': '701363-08-02',
 'tokens': ['Vi', 'liker', 'det', '.'],
 'tsa_tags': ['O', 'O', 'B-targ-Positive-2', 'O']}
```



### Data Fields
- `idx` (str): Unique document-and-sentence identifier from [NoReC_fine](https://github.com/ltgoslo/norec_fine). The 6-digit document identifier can also be used to look up the text and its metadata in [NoReC](https://github.com/ltgoslo/norec).
- `tokens` (List[str]): List of the tokens in the sentence.
- `tsa_tags` (List[str]): List of the tags for each token, in BIO format. There is no integer representation of these in the dataset; one can be built from the data if needed, as sketched below.
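
Since the tags are stored as plain strings, a minimal sketch (assuming the `datasets` library and the default config) for building an integer label mapping could look like this; the label inventory is collected from the training split rather than hard-coded:

```python
from datasets import load_dataset

tsa_data = load_dataset("ltg/norec_tsa")  # default config

# Collect the label inventory from the training split.
label_list = sorted({tag for example in tsa_data["train"] for tag in example["tsa_tags"]})
label2id = {label: i for i, label in enumerate(label_list)}
id2label = {i: label for label, i in label2id.items()}

# Map every sentence's string tags to integer ids, e.g. for token-classification training.
encoded = tsa_data.map(lambda ex: {"labels": [label2id[t] for t in ex["tsa_tags"]]})

print(label2id)
print(encoded["train"][0]["labels"])
```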




### Data Splits
```
DatasetDict({
    test: Dataset({
        features: ['idx', 'tokens', 'tsa_tags'],
        num_rows: 1272
    })
    train: Dataset({
        features: ['idx', 'tokens', 'tsa_tags'],
        num_rows: 8634
    })
    validation: Dataset({
        features: ['idx', 'tokens', 'tsa_tags'],
        num_rows: 1531
    })
})
```

## Dataset Creation

### Curation Rationale
The sentiment expressions and targets are annotated in NoReC_fine according to its [annotation guidelines](https://github.com/ltgoslo/norec_fine/blob/master/annotation_guidelines/guidelines.md).

Since a target may be the target of several sentiment expressions, these are resolved to a single final polarity (and intensity) using the conversion script in [NoReC_tsa](https://github.com/ltgoslo/norec_tsa). There is no "mixed" sentiment category: when a target receives both positive and negative sentiment, the strongest expression wins, and in case of a tie the last expression wins. The resolution rule is illustrated below.
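
The following is only an illustration of the resolution rule described above (not the actual conversion script from the NoReC_tsa repository); the polarity/intensity pairs are hypothetical inputs:

```python
def resolve_target_polarity(expressions):
    """expressions: (polarity, intensity) pairs for one target, in document order,
    e.g. [("Positive", 2), ("Negative", 3)]. Returns the winning pair."""
    final = None
    for polarity, intensity in expressions:
        # The strongest expression wins; ">=" lets a later expression of equal
        # strength override an earlier one, i.e. ties go to the last expression.
        if final is None or intensity >= final[1]:
            final = (polarity, intensity)
    return final

print(resolve_target_polarity([("Positive", 2), ("Negative", 2)]))  # ('Negative', 2)
print(resolve_target_polarity([("Positive", 3), ("Negative", 1)]))  # ('Positive', 3)
```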

### Source Data
The texts are a subset of the Norwegian Review Corpus; its sources and preprocessing are described [here](https://github.com/ltgoslo/norec).

<!---
#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]
--->
### Discussion of Biases
The professional review texts in NoReC, from which NoReC_tsa is drawn, come from a fixed set of Norwegian publishing channels and a limited timespan; both can be explored in the [NoReC metadata](https://raw.githubusercontent.com/ltgoslo/norec/master/data/metadata.json). Both the language usage and the sentiments expressed could have been more diverse with a more diverse set of source texts.

<!---
### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]
--->
### Licensing Information

The data, being derived from [NoReC](https://github.com/ltgoslo/norec), is distributed under a Creative Commons Attribution-NonCommercial licence (CC BY-NC 4.0); the full license text is available at https://creativecommons.org/licenses/by-nc/4.0/.

The licence is motivated by the need to prevent third parties from redistributing the original reviews for commercial purposes. Note that machine-learned models, extracted lexicons, embeddings, and similar resources created on the basis of NoReC are not considered to contain the original data, and so can be used freely, also for commercial purposes, despite the non-commercial condition.


### Citation Information

```bibtex
@InProceedings{OvrMaeBar20,
  author    = {Lilja Øvrelid and Petter Mæhlum and Jeremy Barnes and Erik Velldal},
  title     = {A Fine-grained Sentiment Dataset for {N}orwegian},
  booktitle = {{Proceedings of the 12th Edition of the Language Resources and Evaluation Conference}},
  year      = {2020},
  address   = {Marseille, France}
}
```

<!---
### Contributions

[More Information Needed]
--->