issue: cannot load dataset

#2 · by MartinoMensio

Hello,
I'm having trouble loading this dataset. I first thought the Hugging Face cache was broken, so I cleaned it and re-ran, but I still get the same error.

from datasets import load_dataset
dataset = load_dataset("PleIAs/Spanish-PD-Books")

System:

  • datasets: 3.2.0
  • python: 3.11.3

Thank you in advance for any workaround.
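
For reference, the schema of the shard named in the error can be inspected directly with pyarrow, which is how I confirmed it carries the extra columns (directory, language, lang, real_lang, n, rights, file, ...). A minimal check, assuming the shard is already in your HF cache (the path is the one from the traceback below; adjust it for your setup):

from pyarrow import parquet as pq

# Path copied from the traceback below; adjust for your own cache directory
shard = (
    "/gpfs/work5/0/prjs0986/.hf_cache_dir/hub/datasets--PleIAs--Spanish-PD-Books/"
    "snapshots/001eaf13681f483069361dd82195ce279e12ed63/spanish_pd_100.parquet"
)
print(pq.read_schema(shard))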

Full stacktrace

Repo card metadata block was not found. Setting CardData to empty.
Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 129/129 [00:00<00:00, 398721.60it/s]
Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 129/129 [00:00<00:00, 583.22files/s]
Generating train split: 2585 examples [00:03, 785.88 examples/s]Failed to read file '/gpfs/work5/0/prjs0986/.hf_cache_dir/hub/datasets--PleIAs--Spanish-PD-Books/snapshots/001eaf13681f483069361dd82195ce279e12ed63/spanish_pd_100.parquet' with error <class 'datasets.table.CastError'>: Couldn't cast
directory: string
identifier: string
...1: int64
creator: string
language: string
title: string
publication_date: int64
lang: string
real_lang: string
n: int64
rights: string
file: string
word_count: int64
text: string
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 1844
to
{'identifier': Value(dtype='string', id=None), 'creator': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'publication_date': Value(dtype='string', id=None), 'word_count': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), '__index_level_0__': Value(dtype='int64', id=None)}
because column names don't match
Generating train split: 2585 examples [00:06, 409.80 examples/s]
Traceback (most recent call last):
  File "/gpfs/home2/mmensio/code/machine-translation/venv/lib/python3.11/site-packages/datasets/builder.py", line 1854, in _prepare_split_single
    for _, table in generator:
  File "/gpfs/home2/mmensio/code/machine-translation/venv/lib/python3.11/site-packages/datasets/packaged_modules/parquet/parquet.py", line 106, in _generate_tables
    yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/gpfs/home2/mmensio/code/machine-translation/venv/lib/python3.11/site-packages/datasets/packaged_modules/parquet/parquet.py", line 73, in _cast_table
    pa_table = table_cast(pa_table, self.info.features.arrow_schema)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/gpfs/home2/mmensio/code/machine-translation/venv/lib/python3.11/site-packages/datasets/table.py", line 2292, in table_cast
    return cast_table_to_schema(table, schema)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/gpfs/home2/mmensio/code/machine-translation/venv/lib/python3.11/site-packages/datasets/table.py", line 2240, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
directory: string
identifier: string
...1: int64
creator: string
language: string
title: string
publication_date: int64
lang: string
real_lang: string
n: int64
rights: string
file: string
word_count: int64
text: string
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 1844
to
{'identifier': Value(dtype='string', id=None), 'creator': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'publication_date': Value(dtype='string', id=None), 'word_count': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), '__index_level_0__': Value(dtype='int64', id=None)}
because column names don't match

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/gpfs/home2/mmensio/code/machine-translation/venv/lib/python3.11/site-packages/datasets/load.py", line 2151, in load_dataset
    builder_instance.download_and_prepare(
  File "/gpfs/home2/mmensio/code/machine-translation/venv/lib/python3.11/site-packages/datasets/builder.py", line 924, in download_and_prepare
    self._download_and_prepare(
  File "/gpfs/home2/mmensio/code/machine-translation/venv/lib/python3.11/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/gpfs/home2/mmensio/code/machine-translation/venv/lib/python3.11/site-packages/datasets/builder.py", line 1741, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/gpfs/home2/mmensio/code/machine-translation/venv/lib/python3.11/site-packages/datasets/builder.py", line 1897, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
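
In the meantime, here is a rough workaround sketch I'm considering (not an official fix): download the parquet shards with huggingface_hub, keep only the text/metadata columns from the expected features in the CastError above, and cast everything to string so shards where publication_date / word_count are int64 still line up. It assumes every shard has at least those columns and sits at the root of the dataset repo (as in the path in the traceback), and it loads everything into memory:

import glob
import os

import pyarrow as pa
import pyarrow.parquet as pq
from datasets import Dataset
from huggingface_hub import snapshot_download

local_dir = snapshot_download("PleIAs/Spanish-PD-Books", repo_type="dataset")
files = sorted(glob.glob(os.path.join(local_dir, "*.parquet")))

# Column names taken from the expected features in the CastError above
columns = ["identifier", "creator", "title", "publication_date", "word_count", "text"]
target = pa.schema([pa.field(name, pa.string()) for name in columns])

tables = []
for path in files:
    table = pq.read_table(path, columns=columns).select(columns)
    tables.append(table.cast(target))  # casts int64 columns to string where needed

dataset = Dataset(pa.concat_tables(tables))

Not elegant, but it sidesteps the schema mismatch until the shards are harmonized.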
