Non Matching Splits Sizes Error #11
opened by JingfanKe
Problem Description: dataset info mismatch
First, I used huggingface_hub.snapshot_download() to download this repo, and found the dataset file URL https://datashare.is.ed.ac.uk/bitstream/handle/10283/3443/VCTK-Corpus-0.92.zip at line 39 of vctk.py.
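A minimal sketch of this download step (the local_dir value is a hypothetical path for illustration):

```python
from huggingface_hub import snapshot_download

# Fetch the dataset repo (the loading script vctk.py plus metadata);
# local_dir is a hypothetical destination directory.
local_path = snapshot_download(
    repo_id="CSTR-Edinburgh/vctk",
    repo_type="dataset",
    local_dir="CSTR-Edinburgh/vctk",
)
print(local_path)
```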
Second, I used wget to download the dataset file to the local path CSTR-Edinburgh/vctk/data/VCTK-Corpus-0.92.zip, and modified line 39 of vctk.py to the real path of this zip file.
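Roughly, the edit looked like the sketch below (I'm assuming the URL constant in vctk.py is named _DL_URL; the zipfile check afterwards is just a quick stdlib integrity test on the downloaded archive):

```python
import zipfile

# vctk.py, line 39: the URL constant (name assumed here to be _DL_URL)
# changed from the datashare URL to the local file path.
_DL_URL = "CSTR-Edinburgh/vctk/data/VCTK-Corpus-0.92.zip"

# Sanity-check the archive: testzip() reads every member and returns
# the name of the first corrupt entry, or None if all members are OK.
with zipfile.ZipFile(_DL_URL) as zf:
    bad = zf.testzip()
    if bad:
        print(f"first corrupt member: {bad}")
    else:
        print(f"archive OK, {len(zf.namelist())} members")
```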
Third, I called vctk = load_dataset(path=dataset_path, cache_dir=cache_dir, trust_remote_code=True) from another Python file to process this dataset, but the following error occurred:
Generating train split:   5%|█         | 4345/88156 [00:53<17:11, 81.22 examples/s]
Traceback (most recent call last):
  File "xxxxxxxxx.py", line 144, in <module>
    vctk = load_dataset(path=dataset_path, cache_dir=cache_dir, trust_remote_code=True)
  File "/opt/conda/envs/py39/lib/python3.9/site-packages/datasets/load.py", line 2614, in load_dataset
    builder_instance.download_and_prepare(
  File "/opt/conda/envs/py39/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File "/opt/conda/envs/py39/lib/python3.9/site-packages/datasets/builder.py", line 1789, in _download_and_prepare
    super()._download_and_prepare(
  File "/opt/conda/envs/py39/lib/python3.9/site-packages/datasets/builder.py", line 1140, in _download_and_prepare
    verify_splits(self.info.splits, split_dict)
  File "/opt/conda/envs/py39/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 101, in verify_splits
    raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=40103111, num_examples=88156, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=2477106, num_examples=4345, shard_lengths=None, dataset_name='vctk')}]
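From the error, the repo's metadata expects 88156 train examples but only 4345 were generated before the check ran. While debugging, the split-size verification can be skipped so the build finishes and the partial output can be inspected; a minimal sketch, assuming a datasets version recent enough to support verification_mode (dataset_path and cache_dir are hypothetical placeholders):

```python
from datasets import load_dataset

dataset_path = "CSTR-Edinburgh/vctk"  # hypothetical: local snapshot directory
cache_dir = "./hf_cache"              # hypothetical: cache location

# verification_mode="no_checks" skips the split-size comparison that
# raises NonMatchingSplitsSizesError, letting the build complete.
vctk = load_dataset(
    path=dataset_path,
    cache_dir=cache_dir,
    trust_remote_code=True,
    verification_mode="no_checks",
)
print(vctk["train"].num_rows)  # the metadata expects 88156 here
```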
Is there anything wrong with my usage, or is the problem caused by the zip file?