dataset_info:
- config_name: split-channel
features:
- name: audio
dtype: audio
- name: start_timestamp
dtype: string
- name: start_time_s
dtype: float32
- name: start_frame
dtype: uint64
- name: end_timestamp
dtype: string
- name: end_time_s
dtype: float32
- name: end_frame
dtype: uint64
- name: duration_s
dtype: float32
- name: duration_frames
dtype: uint64
- name: transcription
dtype: string
- name: mother_tongue
dtype: string
- name: participant_id
dtype: string
- name: session_id
dtype: string
- name: device_id
dtype: string
- name: device_channel
dtype: uint8
- name: device_distance_mm
dtype: uint16
- name: device_type
dtype:
class_label:
names:
'0': close-talk
'1': far-field
- name: gender
dtype:
class_label:
names:
'0': female
'1': male
- name: nativeness
dtype:
class_label:
names:
'0': native
'1': non-native
splits:
- name: train
num_bytes: 13863370976.5
num_examples: 132228
- name: test
num_bytes: 13192103916.5
num_examples: 122580
download_size: 23859943038
dataset_size: 27055474893
- config_name: mixed-channel
features:
- name: audio
dtype: audio
- name: start_timestamp
dtype: string
- name: start_time_s
dtype: float32
- name: start_frame
dtype: uint64
- name: end_timestamp
dtype: string
- name: end_time_s
dtype: float32
- name: end_frame
dtype: uint64
- name: duration_s
dtype: float32
- name: duration_frames
dtype: uint64
- name: transcription
dtype: string
- name: mother_tongue
dtype: string
- name: participant_id
dtype: string
- name: session_id
dtype: string
- name: device_id
dtype: string
- name: device_channel
dtype: uint8
- name: device_distance_mm
dtype: uint16
- name: device_type
dtype:
class_label:
names:
'0': close-talk
'1': far-field
- name: gender
dtype:
class_label:
names:
'0': female
'1': male
- name: nativeness
dtype:
class_label:
names:
'0': native
'1': non-native
splits:
- name: train
num_bytes: 2310562016.25
num_examples: 22038
- name: test
num_bytes: 2198683986.25
num_examples: 20430
download_size: 3840697632
dataset_size: 4509246002.5
configs:
- config_name: split-channel
data_files:
- split: train
path: split-channel/train-*
- split: test
path: split-channel/test-*
- config_name: mixed-channel
data_files:
- split: train
path: mixed-channel/train-*
- split: test
path: mixed-channel/test-*
license: cdla-permissive-1.0
task_categories:
- automatic-speech-recognition
- audio-classification
language:
- en
tags:
- dinner party
- dipco
pretty_name: DiPCo - Dinner Party Corpus
This repository contains a reorganized, utterance-focused version of the Dinner Party Corpus, released by Amazon, the Center for Language and Speech Processing (CLSP) and Johns Hopkins University in September 2019.
## Description
The following description is provided in [arXiv:1909.13447](https://arxiv.org/abs/1909.13447):

> We present a speech data corpus that simulates a "dinner party" scenario taking place in an everyday home environment. The corpus was created by recording multiple groups of four Amazon employee volunteers having a natural conversation in English around a dining table. The participants were recorded by a single-channel close-talk microphone and by five far-field 7-microphone array devices positioned at different locations in the recording room. The dataset contains the audio recordings and human labeled transcripts of a total of 10 sessions with a duration between 15 and 45 minutes. The corpus was created to advance in the field of noise robust and distant speech processing and is intended to serve as a public research and benchmarking data set.
## License
As stated in Section 4 of the paper linked above, the dataset is released under the CDLA-Permissive 1.0 license.
## Authors
Van Segbroeck, Maarten; Zaid, Ahmed; Kutsenko, Ksenia; Huerta, Cirenia; Nguyen, Tinh; Luo, Xuewen; Hoffmeister, Björn; Trmal, Jan; Omologo, Maurizio; Maas, Roland
## Contact Persons
Maas, Roland; Hoffmeister, Björn
## Comparison to Base Dataset
- The base dataset was downloaded from Zenodo; it has a compressed size of 12.4 GB and an uncompressed size of 23 GB. It is organized in a manner that minimizes file size and data repetition, with uncut audio and separate label files.
- This dataset has an uncompressed size of 27 GB, making it about 15% larger than the uncompressed base dataset. In exchange for the added size, you gain ease of use: all audio is pre-cut to utterance start and end times and paired with the appropriate labels directly in Parquet.
## How to Use
This repository is made to be used with 🤗 Datasets.
```python
from datasets import load_dataset

dataset = load_dataset(
    "benjamin-paine/dinner-party-corpus",
    name="split-channel",  # 'split-channel' or 'mixed-channel'
    split="train"          # 'train' or 'test'
)

for datum in dataset:
    # Do something with the audio.
    # datum["audio"]["array"] is the waveform sampled at 16 kHz
    # (see datum["audio"]["sampling_rate"]).
    pass
```
## Conversion Script
The script used to convert the data is available in this repository as `convert.py`.
## Citation

```bibtex
@misc{vansegbroeck2019dipcodinnerparty,
  title={DiPCo -- Dinner Party Corpus},
  author={Maarten Van Segbroeck and Ahmed Zaid and Ksenia Kutsenko and Cirenia Huerta and Tinh Nguyen and Xuewen Luo and Björn Hoffmeister and Jan Trmal and Maurizio Omologo and Roland Maas},
  year={2019},
  eprint={1909.13447},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  url={https://arxiv.org/abs/1909.13447},
}
```