---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- vi
pretty_name: InfoRe Technology public dataset №2
size_categories:
- 100K<n<1M
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: transcription
    dtype: string
  splits:
  - name: train
    num_bytes: 55377534543.241
    num_examples: 315449
  download_size: 46594653323
  dataset_size: 55377534543.241
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
unofficial mirror of InfoRe Technology public dataset №2
official announcement: https://www.facebook.com/groups/j2team.community/permalink/1010834009248719/
415h, 315k samples: Vietnamese audiobooks of Chinese wǔxiá 武俠 & xiānxiá 仙俠 novels

the dataset was scraped from YouTube readings of wǔxiá & xiānxiá stories, with labels produced automatically by a text-alignment technique
official download: magnet:?xt=urn:btih:41f1290325ecb6f1230ecdff2441527c9cd43fd0&dn=audiobooks.zip&tr=http%3A%2F%2Foffice.socials.vn%3A8725%2Fannounce
mirror: https://files.huylenguyen.com/audiobooks.zip
unzip password: BroughtToYouByInfoRe
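a minimal Python sketch of fetching the mirror and extracting it with the password above; it assumes the archive uses legacy ZipCrypto encryption (the only scheme Python's `zipfile` can decrypt; an AES-encrypted zip would need a third-party tool), and note `urlretrieve` cannot resume an interrupted ~46 GB download:

```python
import urllib.request
import zipfile

URL = "https://files.huylenguyen.com/audiobooks.zip"

# download the mirror archive (~46 GB, no resume support)
urllib.request.urlretrieve(URL, "audiobooks.zip")

# extract with the published password (assumes ZipCrypto encryption)
with zipfile.ZipFile("audiobooks.zip") as zf:
    zf.extractall(path="audiobooks", pwd=b"BroughtToYouByInfoRe")
```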
pre-process: none
to do: check transcriptions for misspellings
usage with HuggingFace `datasets`:

```python
# pip install -q "datasets[audio]"
from datasets import load_dataset
from torch.utils.data import DataLoader

# streaming=True returns an IterableDataset, so nothing is downloaded up front
dataset = load_dataset("doof-ferb/infore2_audiobooks", split="train", streaming=True)
# IterableDataset has no set_format(); use with_format(), which returns a new object
dataset = dataset.with_format("torch")
dataloader = DataLoader(dataset, batch_size=4)
```
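note that with `batch_size=4` the default collate will fail on variable-length waveforms; a sketch of a custom `collate_fn` that pads each batch to its longest sample (the function name and padding strategy are illustrative, not part of the dataset; it assumes each example exposes the raw waveform under `audio["array"]`, as the `Audio` feature does):

```python
import torch

def collate_fn(batch):
    # convert waveforms to 1-D float tensors and pad to the longest in the batch
    waves = [torch.as_tensor(item["audio"]["array"], dtype=torch.float32) for item in batch]
    lengths = torch.tensor([len(w) for w in waves])
    padded = torch.nn.utils.rnn.pad_sequence(waves, batch_first=True)
    texts = [item["transcription"] for item in batch]
    return {"audio": padded, "lengths": lengths, "transcription": texts}

dataloader = DataLoader(dataset, batch_size=4, collate_fn=collate_fn)
```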