# Benchmark STT Datasets
This repository contains benchmark Speech-to-Text (STT) datasets designed for fair and consistent evaluation of STT models. The datasets are hosted on Hugging Face, and each one is carefully curated to ensure balanced sampling across departments and attributes. Below you'll find links to the datasets, a summary of their benchmarks, and details of the methodology used to prepare them.
## Datasets Overview
Dataset | Description | Link |
---|---|---|
AB | Audiobook dataset with attributes like age, gender, and music. | [AB Dataset](https://huggingface.co/datasets/ganga4364/benchmark-stt-AB) |
CS | Children's speech recordings with features like speaker ID and microphone type. | [CS Dataset](https://huggingface.co/datasets/ganga4364/benchmark-stt-CS) |
HS | Historical speeches dataset with details like publishing year and place of origin. | [HS Dataset](https://huggingface.co/datasets/ganga4364/benchmark-stt-HS) |
MV | Movie audio data with attributes like subtitles. | [MV Dataset](https://huggingface.co/datasets/ganga4364/benchmark-stt-MV) |
NS | Natural speech recordings with attributes like location and speaker names. | [NS Dataset](https://huggingface.co/datasets/ganga4364/benchmark-stt-NS) |
NW | News dataset with features like media organization and audio quality. | [NW Dataset](https://huggingface.co/datasets/ganga4364/benchmark-stt-NW) |
PC | Podcast recordings with attributes like publishing year and gender. | [PC Dataset](https://huggingface.co/datasets/ganga4364/benchmark-stt-PC) |
TT | Tibetan teachings and talks dataset with topics and episodes. | [TT Dataset](https://huggingface.co/datasets/ganga4364/benchmark-stt-TT) |
## Benchmark Summary
Each dataset contains ~10,000 samples, distributed proportionally among its attributes and categories. For categories or speakers with limited data, up to 30% of their data was used to maintain a balance between benchmark and training datasets.
### AB (Audiobook)
- Age: 1680 rows
- Gender: 2000 rows
- Music: 2000 rows
- Name: 1607 rows
### CS (Children's Speech)
- Recording Type: 1707 rows
- Class: 319 rows
- Gender: 1526 rows
- Speaker ID: 1807 rows
- Microphone Type: 1675 rows
- School: 1316 rows
### NS (Natural Speech)
- Mic/Phone: 1526 rows
- Location: 1194 rows
- Speaker Name: 1974 rows
- Gender: 1901 rows
- Age: 1925 rows
### PC (Podcast)
- Name of Podcast: 1915 rows
- Publishing Year: 1901 rows
- Channel: 1460 rows
- Audio Quality: 2000 rows
- Country: 1332 rows
- Gender: 1611 rows
- Age Group: 1229 rows
### HS (Historical Speeches)
- Publishing Year: 1968 rows
- Name: 1970 rows
- Gender: 2000 rows
- Age Group: 1397 rows
- Place of Origin: 1363 rows
- Exiled Year: 1236 rows
### NW (News)
- Name of News: 1901 rows
- Publishing Year: 1755 rows
- Media Organization: 1374 rows
- Audio Quality: 1710 rows
- Single Speaker: 2000 rows
- Country: 1123 rows
### MV (Movies)
- Name of the Movie: 1776 rows
- Subtitle: 1491 rows
### TT (Tibetan Teachings & Talks)
- Teacher: 1995 rows
- Topic: 1888 rows
- Books: 348 rows
- Episode: 1831 rows
## Methodology
The benchmarks were created with the following principles:
- Proportional Sampling: Approximately 10,000 samples per department, distributed equally across unique values in each column.
- Fair Representation: Data from each unique value (e.g., speaker, location) was proportionally sampled. For categories with limited data, up to 30% of the available data was used.
- Consistency: Balanced sampling was maintained to avoid bias in benchmark datasets while preserving the majority of the data for training.
- Exclusions: Categories or speakers with extremely low data availability were excluded.
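
The sampling rule described above (an equal per-category share, capped at 30% of whatever a category actually has) can be sketched as follows. This is a minimal illustration with made-up category sizes, not the actual curation code; the function name and example numbers are assumptions for demonstration only.

```python
import math


def benchmark_quota(counts: dict[str, int],
                    target_total: int = 10_000,
                    cap: float = 0.30) -> dict[str, int]:
    """Per-category benchmark quota: an equal share of the target,
    capped at `cap` of the rows available in that category."""
    equal_share = target_total // len(counts)  # equal split across unique values
    return {cat: min(equal_share, math.floor(n * cap)) for cat, n in counts.items()}


# Hypothetical rows available per speaker (illustrative numbers):
available = {"speaker_a": 40_000, "speaker_b": 6_000, "speaker_c": 500}
print(benchmark_quota(available))
# {'speaker_a': 3333, 'speaker_b': 1800, 'speaker_c': 150}
```

Large categories contribute the full equal share (here 10,000 // 3 = 3,333 rows), while small categories are capped at 30% of their data so most of it remains available for training.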
## How to Download Datasets
You can download all datasets programmatically using the script below:
```python
from datasets import load_dataset

# Hugging Face repo IDs for each benchmark department
datasets = {
    "AB": "ganga4364/benchmark-stt-AB",
    "CS": "ganga4364/benchmark-stt-CS",
    "HS": "ganga4364/benchmark-stt-HS",
    "MV": "ganga4364/benchmark-stt-MV",
    "NS": "ganga4364/benchmark-stt-NS",
    "NW": "ganga4364/benchmark-stt-NW",
    "PC": "ganga4364/benchmark-stt-PC",
    "TT": "ganga4364/benchmark-stt-TT",
}

# Download each dataset and save a local copy
for name, repo_id in datasets.items():
    print(f"Downloading {name} dataset...")
    dataset = load_dataset(repo_id)
    dataset.save_to_disk(f"./downloaded_datasets/{name}")
```