---
dataset_info:
  features:
  - name: speaker_id
    dtype: string
  - name: transcription_id
    dtype: int64
  - name: text
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 44100
  splits:
  - name: train
    num_bytes: 12163543668.45736
    num_examples: 18863
  download_size: 10460673849
  dataset_size: 12163543668.45736
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc0-1.0
task_categories:
- text-to-speech
language:
- da
pretty_name: CoRal TTS
size_categories:
- 10K<n<100K
---

# Dataset Card for CoRal TTS

## Dataset Description

- **Repository:** <https://github.com/alexandrainst/coral>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:[email protected])
- **Size of downloaded dataset files:** 14.63 GB
- **Size of the generated dataset:** 15.25 GB
- **Total amount of disk used:** 29.88 GB

### Dataset Summary

This dataset consists of recordings from two professional Danish speakers, one female and one male, each contributing roughly 17 hours of Danish speech.

The dataset is part of the [CoRal project](https://alexandra.dk/coral/) which is funded by the [Danish Innovation Fund](https://innovationsfonden.dk/en).

The text data was selected by the [Alexandra Institute](https://alexandra.dk/about-the-alexandra-institute/) ([Github repo for the dataset creation](https://github.com/alexandrainst/tts_text)) and consists of sentences from [sundhed.dk](https://sundhed.dk/), [borger.dk](https://borger.dk/), names of bus stops and stations, manually filtered Reddit comments, and dates and times.

The audio data was recorded by the public institution [Nota](https://nota.dk/), which is part of the Danish Ministry of Culture.


### Supported Tasks and Leaderboards

Speech synthesis is the intended task for this dataset. No leaderboard is active at this point.


### Languages

The dataset is available in Danish (`da`).


## Dataset Structure

### Data Instances

- **Size of downloaded dataset files:** 14.63 GB
- **Size of the generated dataset:** 15.25 GB
- **Total amount of disk used:** 29.88 GB

An example from the dataset looks as follows.
```python
{
 'speaker_id': 'mic',
 'transcription_id': 0,
 'text': '26 rigtige.',
 'audio': {
  'path': 'mic_00001.wav',
  'array': array([-0.00054932, -0.00054932, -0.00061035, ...,  0.00027466,
                   0.00036621,  0.00030518]),
  'sampling_rate': 44100
 }
}
```
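Since each decoded `audio` entry exposes the raw waveform alongside its sampling rate, a clip's duration follows directly as samples divided by sampling rate. A minimal sketch, using a synthetic array in place of a real sample:

```python
import numpy as np

def clip_duration_seconds(audio: dict) -> float:
    """Duration of a decoded audio entry: number of samples / sampling rate."""
    return len(audio["array"]) / audio["sampling_rate"]

# Synthetic stand-in for one decoded example (real clips vary in length).
example_audio = {
    "path": "mic_00001.wav",
    "array": np.zeros(44100 * 2),  # two seconds of silence at 44.1 kHz
    "sampling_rate": 44100,
}

print(clip_duration_seconds(example_audio))  # → 2.0
```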

### Data Fields

The data fields are the same among all splits.

- `speaker_id`: a `string` feature.
- `transcription_id`: an `int64` feature.
- `text`: a `string` feature.
- `audio`: an `Audio` feature.


### Dataset Statistics

There are 18,863 samples in the dataset.
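With roughly 17 hours per speaker (about 34 hours in total, per the summary above) spread across 18,863 samples, a back-of-the-envelope estimate of the average clip length is:

```python
# Rough estimate only: the summary states "roughly 17 hours" per speaker.
total_hours = 2 * 17
num_samples = 18_863

avg_seconds = total_hours * 3600 / num_samples
print(round(avg_seconds, 1))  # ≈ 6.5 seconds per clip
```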


## Additional Information

### Dataset Curators

[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [Alexandra
Institute](https://alexandra.dk/) uploaded it to the Hugging Face Hub.

### Licensing Information

The dataset is licensed under the [CC0
license](https://creativecommons.org/share-your-work/public-domain/cc0/).