Dataset does not load: it needs a loading script
This dataset does not load when running:
from datasets import load_dataset
ds = load_dataset("Tylersuard/PathfinderX2")
Please note that currently, for image data-only datasets (we call them "no-code", as no Python script is present), we only support the image classification task, via our ImageFolder builder (label names are extracted from the directory names). This is what our library tries to apply to your data files, and it is not applicable here.
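For reference, ImageFolder expects one sub-directory per class, and infers labels from the directory names. A stdlib-only sketch of that layout (the class names here are made up for illustration):

```python
import tempfile
from pathlib import Path

# Build a throwaway directory tree in the layout ImageFolder expects:
# one sub-directory per class under each split, with the class name
# taken from the directory name (the class names are made up).
root = Path(tempfile.mkdtemp())
for label in ("cat", "dog"):
    class_dir = root / "train" / label
    class_dir.mkdir(parents=True)
    (class_dir / "0.png").touch()

# ImageFolder derives the label set from these directory names:
labels = sorted(p.name for p in (root / "train").iterdir() if p.is_dir())
print(labels)  # ['cat', 'dog']
```

A segmentation dataset has no such class directories, which is why this builder cannot be applied to it.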
For more complex image tasks, you need to implement a Python script to parse the data files within your ZIP archive.
You can check our docs:
- Create an image dataset: Loading script
You can also find implementations of other image tasks in our docs:
- Depth estimation
- Semantic segmentation
- And the corresponding Python script for "scene_parsing": https://huggingface.co/datasets/scene_parse_150/blob/main/scene_parse_150.py
@albertvillanova
Thank you for your help on this. I am looking to do segmentation, but it looks like the scene-parse script may be a little more heavy-duty than what I need. For example, see below: the first image is the input and the second image is the target (label). I have both images in the dataset, so I don't need to generate anything. Do you have a simple script for a dataset like this?
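For what it's worth, the pairing logic at the heart of such a script can be quite small. A stdlib-only sketch, assuming images and masks live in two flat directories with matching filenames (the directory names and naming scheme are placeholders, not the dataset's actual layout):

```python
from pathlib import Path


def generate_examples(images_dir, annotations_dir):
    """Yield (idx, example) pairs for an image/mask segmentation dataset.

    Files are paired by sorted filename, so both directories must use
    the same naming scheme (e.g. 0.png, 1.png, ...).
    """
    images = sorted(Path(images_dir).glob("*.png"))
    annotations = sorted(Path(annotations_dir).glob("*.png"))
    if len(images) != len(annotations):
        raise ValueError("images and annotations are not paired 1:1")
    for idx, (image, annotation) in enumerate(zip(images, annotations)):
        yield idx, {"image": str(image), "annotation": str(annotation)}
```

In an actual loading script this generator body would sit inside `_generate_examples` of a `GeneratorBasedBuilder` subclass, with the `image` and `annotation` columns declared as `Image()` features.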
@albertvillanova I created the script here https://huggingface.co/datasets/Tylersuard/PathfinderX2/blob/main/PathfinderX2.py as per instructions. When I try to run load_dataset("Tylersuard/PathfinderX2") I get the following error:
ValueError Traceback (most recent call last)
/usr/local/lib/python3.9/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1609 _time = time.time()
-> 1610 for key, record in generator:
1611 if max_shard_size is not None and writer._num_bytes > max_shard_size:
7 frames
~/.cache/huggingface/modules/datasets_modules/datasets/Tylersuard--PathfinderX2/fe4967f9f9ec26732b0d1d5c893f3c84782a2297295d052f6dbdb64fe83dcdfd/PathfinderX2.py in _generate_examples(self, data, split)
107 if split == "testing":
--> 108 for idx, (path, file) in enumerate(data):
109 if path.endswith(".png"):
ValueError: too many values to unpack (expected 2)
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
in <cell line: 3>()
1 from datasets import load_dataset
2
----> 3 ds = load_dataset("Tylersuard/PathfinderX2")
/usr/local/lib/python3.9/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
1789
1790 # Download and prepare data
-> 1791 builder_instance.download_and_prepare(
1792 download_config=download_config,
1793 download_mode=download_mode,
/usr/local/lib/python3.9/dist-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
889 if num_proc is not None:
890 prepare_split_kwargs["num_proc"] = num_proc
--> 891 self._download_and_prepare(
892 dl_manager=dl_manager,
893 verification_mode=verification_mode,
/usr/local/lib/python3.9/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)
1649
1650 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):
-> 1651 super()._download_and_prepare(
1652 dl_manager,
1653 verification_mode,
/usr/local/lib/python3.9/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
984 try:
985 # Prepare split will record examples associated to the split
--> 986 self._prepare_split(split_generator, **prepare_split_kwargs)
987 except OSError as e:
988 raise OSError(
/usr/local/lib/python3.9/dist-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size)
1488 gen_kwargs = split_generator.gen_kwargs
1489 job_id = 0
-> 1490 for job_id, done, content in self._prepare_split_single(
1491 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1492 ):
/usr/local/lib/python3.9/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1644 if isinstance(e, SchemaInferenceError) and e.context is not None:
1645 e = e.context
-> 1646 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1647
1648 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
Hi, your error comes from these code lines:
~/.cache/huggingface/modules/datasets_modules/datasets/Tylersuard--PathfinderX2/fe4967f9f9ec26732b0d1d5c893f3c84782a2297295d052f6dbdb64fe83dcdfd/PathfinderX2.py in _generate_examples(self, data, split)
107 if split == "testing":
--> 108 for idx, (path, file) in enumerate(data):
109 if path.endswith(".png"):
ValueError: too many values to unpack (expected 2)
Please note that your `data` variable is a tuple with 2 elements, and when you iterate over it, you iterate over those 2 elements. You should first unpack the tuple elements and then iterate (as you did a few lines below):
images, annotations = data
Also note that the images and annotations may not be properly aligned when iterating over them...
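The failure mode is easy to reproduce with a toy data tuple (paths and file contents below are dummies):

```python
# `data` is a 2-tuple (images, annotations), each a list of
# (path, file) pairs -- three of them, to mirror a real archive.
images = [("imgs/0.png", b""), ("imgs/1.png", b""), ("imgs/2.png", b"")]
annotations = [("segs/0.png", b""), ("segs/1.png", b""), ("segs/2.png", b"")]
data = (images, annotations)

# Wrong: enumerate(data) yields the two lists themselves, and
# unpacking a 3-element list into (path, file) fails:
try:
    for idx, (path, file) in enumerate(data):
        pass
except ValueError as e:
    print(e)  # too many values to unpack (expected 2)

# Right: unpack the tuple first, then iterate over each part:
images, annotations = data
paths = [path for path, file in images]
print(paths)  # ['imgs/0.png', 'imgs/1.png', 'imgs/2.png']
```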
Ok! It runs now, with no errors. Thank you for your help!
However, my dataset is still not showing in the previewer.
Hi @Tylersuard, please note that the issue is not with the viewer but with your script: did you manage to load your dataset? I don't think so...
from datasets import load_dataset
ds = load_dataset("Tylersuard/PathfinderX2")
gives an empty dataset (with 0 rows):
In [5]: ds
Out[5]:
DatasetDict({
train: Dataset({
features: ['image', 'annotation'],
num_rows: 0
})
test: Dataset({
features: ['image', 'annotation'],
num_rows: 0
})
validation: Dataset({
features: ['image', 'annotation'],
num_rows: 0
})
})
You can test it locally in your computer as well, so that you can easily debug it:
ds = load_dataset("/path/to/your/script/PathfinderX2.py")
Please note that your `image_id2annot` is empty because the condition `if split in path_annot` is never True:
- Example of `path_annot`: "segs/0.png"
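A quick check confirms it, using the example annotation path above (the split names are those a script like this would typically test against):

```python
path_annot = "segs/0.png"  # example annotation path from the archive
for split in ("training", "validation", "testing"):
    # None of the split names occur in the path, so a
    # `if split in path_annot` filter drops every file:
    print(split, split in path_annot)  # False each time
```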
Ah, OK, I understand now. My dataset does not have any splits. Is that OK, or do I need to create the splits in the folders? Thank you for your help.
Ok so I got it to download and extract:
DatasetDict({
train: Dataset({
features: ['image', 'annotation'],
num_rows: 200000
})
})
However, it is still not showing in my previewer.
Never mind, everything works now, thank you for your help!