| Column | Type | Range / distinct values |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 299 |
| url | stringlengths | 61 – 61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 75 – 75 |
| comments_url | stringlengths | 70 – 70 |
| events_url | stringlengths | 68 – 68 |
| html_url | stringlengths | 49 – 51 |
| id | int64 | 1.41B – 1.47B |
| node_id | stringlengths | 18 – 19 |
| number | int64 | 5.12k – 5.32k |
| title | stringlengths | 4 – 120 |
| user | stringlengths | 904 – 1.05k |
| labels | stringclasses | 13 values |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | stringclasses | 10 values |
| assignees | stringclasses | 11 values |
| milestone | stringclasses | 1 value |
| comments | int64 | 0 – 26 |
| created_at | stringlengths | 20 – 20 |
| updated_at | stringlengths | 20 – 20 |
| closed_at | stringlengths | 14 – 20 |
| author_association | stringclasses | 3 values |
| active_lock_reason | stringclasses | 1 value |
| draft | stringclasses | 3 values |
| pull_request | stringlengths | 14 – 315 |
| body | stringlengths | 10 – 33.9k |
| reactions | stringlengths | 194 – 194 |
| timeline_url | stringlengths | 70 – 70 |
| performed_via_github_app | stringclasses | 1 value |
| state_reason | stringclasses | 2 values |
100
https://api.github.com/repos/huggingface/datasets/issues/5319
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5319/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5319/comments
https://api.github.com/repos/huggingface/datasets/issues/5319/events
https://github.com/huggingface/datasets/pull/5319
1,470,945,515
PR_kwDODunzps5ECkfc
5,319
Fix Text sample_by paragraph
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
0
2022-12-01T09:08:09Z
2022-12-01T09:08:09Z
No information
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5319', 'html_url': 'https://github.com/huggingface/datasets/pull/5319', 'diff_url': 'https://github.com/huggingface/datasets/pull/5319.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5319.patch', 'merged_at': None}
Fix #5316.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5319/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5319/timeline
No information
No information
101
https://api.github.com/repos/huggingface/datasets/issues/5318
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5318/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5318/comments
https://api.github.com/repos/huggingface/datasets/issues/5318/events
https://github.com/huggingface/datasets/pull/5318
1,470,749,750
PR_kwDODunzps5EB6RM
5,318
Origin/fix missing features error
{'login': 'eunseojo', 'id': 12104720, 'node_id': 'MDQ6VXNlcjEyMTA0NzIw', 'avatar_url': 'https://avatars.githubusercontent.com/u/12104720?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/eunseojo', 'html_url': 'https://github.com/eunseojo', 'followers_url': 'https://api.github.com/users/eunseojo/followers', 'following_url': 'https://api.github.com/users/eunseojo/following{/other_user}', 'gists_url': 'https://api.github.com/users/eunseojo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/eunseojo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/eunseojo/subscriptions', 'organizations_url': 'https://api.github.com/users/eunseojo/orgs', 'repos_url': 'https://api.github.com/users/eunseojo/repos', 'events_url': 'https://api.github.com/users/eunseojo/events{/privacy}', 'received_events_url': 'https://api.github.com/users/eunseojo/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
1
2022-12-01T06:18:39Z
2022-12-01T06:32:31Z
No information
NONE
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5318', 'html_url': 'https://github.com/huggingface/datasets/pull/5318', 'diff_url': 'https://github.com/huggingface/datasets/pull/5318.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5318.patch', 'merged_at': None}
No information
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5318/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5318/timeline
No information
No information
102
https://api.github.com/repos/huggingface/datasets/issues/5317
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5317/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5317/comments
https://api.github.com/repos/huggingface/datasets/issues/5317/events
https://github.com/huggingface/datasets/issues/5317
1,470,390,164
I_kwDODunzps5XpF-U
5,317
`ImageFolder` performs poorly with large datasets
{'login': 'salieri', 'id': 1086393, 'node_id': 'MDQ6VXNlcjEwODYzOTM=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1086393?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/salieri', 'html_url': 'https://github.com/salieri', 'followers_url': 'https://api.github.com/users/salieri/followers', 'following_url': 'https://api.github.com/users/salieri/following{/other_user}', 'gists_url': 'https://api.github.com/users/salieri/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/salieri/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/salieri/subscriptions', 'organizations_url': 'https://api.github.com/users/salieri/orgs', 'repos_url': 'https://api.github.com/users/salieri/repos', 'events_url': 'https://api.github.com/users/salieri/events{/privacy}', 'received_events_url': 'https://api.github.com/users/salieri/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
0
2022-12-01T00:04:21Z
2022-12-01T00:04:47Z
No information
NONE
No information
No information
No information
### Describe the bug

While testing image dataset creation, I'm seeing significant performance bottlenecks with the `imagefolder` loader when scanning a directory structure with a large number of images.

## Setup

* Nested directories (5 levels deep)
* 3M+ images
* 1 `metadata.jsonl` file

## Performance Degradation Point 1

Degradation occurs because [`get_data_files_patterns`](https://github.com/huggingface/datasets/blob/main/src/datasets/data_files.py#L231-L243) runs the exact same scan for many different types of patterns, and there doesn't seem to be a way to easily limit this. It's controlled by the definition of [`ALL_DEFAULT_PATTERNS`](https://github.com/huggingface/datasets/blob/main/src/datasets/data_files.py#L82-L85).

One scan with 3M+ files takes about 10–15 minutes to complete on my setup, so having those extra scans really slows things down – from 10 minutes to 60+. Most of the scans return no matches, but they still take a significant amount of time to complete – hence the poor performance.

As a side effect, when this scan is run on 3M+ image files, Python also consumes up to 12 GB of RAM, which is not ideal.

## Performance Degradation Point 2

The second performance bottleneck is in [`PackagedDatasetModuleFactory.get_module`](https://github.com/huggingface/datasets/blob/d7dfbc83d68e87ba002c5eb2555f7a932e59038a/src/datasets/load.py#L707-L711), which calls `DataFilesDict.from_local_or_remote`. It runs for a long time (60 min+), consuming significant amounts of RAM – even more than point 1 above. Based on `iostat -d 2`, it performs **zero** disk operations, which suggests a code-based bottleneck that could be sorted out.

### Steps to reproduce the bug

```python
from datasets import load_dataset
import os
import huggingface_hub

dataset = load_dataset(
    'imagefolder',
    data_dir='/some/path',
    # just to spell it out:
    split=None,
    drop_labels=True,
    keep_in_memory=False
)

dataset.push_to_hub('account/dataset', private=True)
```

### Expected behavior

While it's certainly possible to write a custom loader to replace `ImageFolder` with, it'd be great if the off-the-shelf `ImageFolder` could by default scale to large datasets. Or perhaps there could be a dedicated loader just for large datasets that trades off flexibility for performance? As in, maybe you have to define explicitly how you want it to work, rather than having it try to guess your data structure the way `_get_data_files_patterns()` does?

### Environment info

- `datasets` version: 2.7.1
- Platform: Linux-4.14.296-222.539.amzn2.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.7.10
- PyArrow version: 10.0.1
- Pandas version: 1.3.5
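As a workaround sketch (my own suggestion, not from the thread): passing explicit `data_files` globs to `load_dataset` should skip the default pattern-guessing scans entirely, assuming the images share a known extension. The paths below are hypothetical:

```python
from datasets import load_dataset

# One explicit glob avoids scanning the tree once per default pattern
# in ALL_DEFAULT_PATTERNS.
dataset = load_dataset(
    "imagefolder",
    data_files={"train": "/some/path/**/*.jpg"},  # hypothetical path
    drop_labels=True,
)
```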
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5317/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5317/timeline
No information
No information
103
https://api.github.com/repos/huggingface/datasets/issues/5316
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5316/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5316/comments
https://api.github.com/repos/huggingface/datasets/issues/5316/events
https://github.com/huggingface/datasets/issues/5316
1,470,115,681
I_kwDODunzps5XoC9h
5,316
Bug in sample_by="paragraph"
{'login': 'adampauls', 'id': 1243668, 'node_id': 'MDQ6VXNlcjEyNDM2Njg=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1243668?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/adampauls', 'html_url': 'https://github.com/adampauls', 'followers_url': 'https://api.github.com/users/adampauls/followers', 'following_url': 'https://api.github.com/users/adampauls/following{/other_user}', 'gists_url': 'https://api.github.com/users/adampauls/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/adampauls/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/adampauls/subscriptions', 'organizations_url': 'https://api.github.com/users/adampauls/orgs', 'repos_url': 'https://api.github.com/users/adampauls/repos', 'events_url': 'https://api.github.com/users/adampauls/events{/privacy}', 'received_events_url': 'https://api.github.com/users/adampauls/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}]
No information
1
2022-11-30T19:24:13Z
2022-12-01T07:58:47Z
No information
NONE
No information
No information
No information
### Describe the bug

I think [this line](https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/text/text.py#L96) is wrong and should be `batch = f.read(self.config.chunksize)`. Otherwise it will never terminate, because even when `f` is finished reading, `batch` will still be truthy from the last iteration.

### Steps to reproduce the bug

```
> cat test.txt
a
b
c
d
e
f
```

```python
>>> import datasets
>>> datasets.load_dataset("text", data_files={"train": "test.txt"}, sample_by="paragraph")
```

This will go on forever.

### Expected behavior

Terminates very quickly.

### Environment info

`version = "2.6.1"`, but I think the bug is still there on main.
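To illustrate the failure mode, a self-contained sketch (mine, not the repository's code) of the loop shape in `text.py`: with `+=`, EOF appends an empty string and `batch` never becomes falsy, while plain assignment terminates.

```python
import io

def count_reads(f, chunksize=4, accumulate=False):
    """Sketch of the loop: accumulate=True mimics the bug, where leftover
    text keeps `batch` truthy forever once the file is exhausted."""
    batch = f.read(chunksize)
    reads = 0
    while batch and reads < 10:          # cap iterations so the sketch terminates
        reads += 1
        if accumulate:
            batch += f.read(chunksize)   # buggy: "" is appended, batch never empties
        else:
            batch = f.read(chunksize)    # fixed: EOF yields "" and ends the loop
    return reads

print(count_reads(io.StringIO("a\nb\nc\n"), accumulate=True))   # hits the cap: 10
print(count_reads(io.StringIO("a\nb\nc\n"), accumulate=False))  # terminates: 2
```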
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5316/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5316/timeline
No information
No information
104
https://api.github.com/repos/huggingface/datasets/issues/5315
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5315/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5315/comments
https://api.github.com/repos/huggingface/datasets/issues/5315/events
https://github.com/huggingface/datasets/issues/5315
1,470,026,797
I_kwDODunzps5XntQt
5,315
Adding new splits to a dataset script with existing old splits info in metadata's `dataset_info` fails
{'login': 'polinaeterna', 'id': 16348744, 'node_id': 'MDQ6VXNlcjE2MzQ4NzQ0', 'avatar_url': 'https://avatars.githubusercontent.com/u/16348744?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/polinaeterna', 'html_url': 'https://github.com/polinaeterna', 'followers_url': 'https://api.github.com/users/polinaeterna/followers', 'following_url': 'https://api.github.com/users/polinaeterna/following{/other_user}', 'gists_url': 'https://api.github.com/users/polinaeterna/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/polinaeterna/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/polinaeterna/subscriptions', 'organizations_url': 'https://api.github.com/users/polinaeterna/orgs', 'repos_url': 'https://api.github.com/users/polinaeterna/repos', 'events_url': 'https://api.github.com/users/polinaeterna/events{/privacy}', 'received_events_url': 'https://api.github.com/users/polinaeterna/received_events', 'type': 'User', 'site_admin': False}
[{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}]
open
false
{'login': 'polinaeterna', 'id': 16348744, 'node_id': 'MDQ6VXNlcjE2MzQ4NzQ0', 'avatar_url': 'https://avatars.githubusercontent.com/u/16348744?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/polinaeterna', 'html_url': 'https://github.com/polinaeterna', 'followers_url': 'https://api.github.com/users/polinaeterna/followers', 'following_url': 'https://api.github.com/users/polinaeterna/following{/other_user}', 'gists_url': 'https://api.github.com/users/polinaeterna/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/polinaeterna/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/polinaeterna/subscriptions', 'organizations_url': 'https://api.github.com/users/polinaeterna/orgs', 'repos_url': 'https://api.github.com/users/polinaeterna/repos', 'events_url': 'https://api.github.com/users/polinaeterna/events{/privacy}', 'received_events_url': 'https://api.github.com/users/polinaeterna/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'polinaeterna', 'id': 16348744, 'node_id': 'MDQ6VXNlcjE2MzQ4NzQ0', 'avatar_url': 'https://avatars.githubusercontent.com/u/16348744?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/polinaeterna', 'html_url': 'https://github.com/polinaeterna', 'followers_url': 'https://api.github.com/users/polinaeterna/followers', 'following_url': 'https://api.github.com/users/polinaeterna/following{/other_user}', 'gists_url': 'https://api.github.com/users/polinaeterna/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/polinaeterna/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/polinaeterna/subscriptions', 'organizations_url': 'https://api.github.com/users/polinaeterna/orgs', 'repos_url': 'https://api.github.com/users/polinaeterna/repos', 'events_url': 'https://api.github.com/users/polinaeterna/events{/privacy}', 'received_events_url': 'https://api.github.com/users/polinaeterna/received_events', 'type': 'User', 'site_admin': False}]
No information
1
2022-11-30T18:02:15Z
2022-12-01T07:00:44Z
No information
CONTRIBUTOR
No information
No information
No information
### Describe the bug

If you first create a custom dataset with a specific set of splits, generate metadata with `datasets-cli test ... --save_info`, and then change your script to include more splits, loading fails. That's what happened in https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/discussions/2#6385fd1269634850f8ddff48.

### Steps to reproduce the bug

1. Create a dataset script that returns, for example, only a `"train"` split in `_split_generators`. Specifically, if you really want to reproduce, copy https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/blob/main/food_vision_199_classes.py

2. Run `datasets-cli test dataset_script.py --save_info --all_configs`. This generates metadata YAML in `README.md` containing info about splits, for example:

```
splits:
- name: train
  num_bytes: 2973286
  num_examples: 19747
```

3. Make changes to your script so that it returns another set of splits, for example `"train"` and `"test"` (uncomment [these lines](https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/blob/main/food_vision_199_classes.py#L271)).

4. Run `load_dataset` and get the following error:

```
Traceback (most recent call last):
  File "/home/daniel/code/pytorch/env/bin/datasets-cli", line 8, in <module>
    sys.exit(main())
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/commands/datasets_cli.py", line 39, in main
    service.run()
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/commands/test.py", line 141, in run
    builder.download_and_prepare(
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 822, in download_and_prepare
    self._download_and_prepare(
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 1555, in _download_and_prepare
    super()._download_and_prepare(
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 913, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 1356, in _prepare_split
    split_info = self.info.splits[split_generator.name]
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/splits.py", line 525, in __getitem__
    instructions = make_file_instructions(
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/arrow_reader.py", line 111, in make_file_instructions
    name2filenames = {
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/arrow_reader.py", line 112, in <dictcomp>
    info.name: filenames_for_dataset_split(
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/naming.py", line 78, in filenames_for_dataset_split
    prefix = filename_prefix_for_split(dataset_name, split)
  File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/naming.py", line 57, in filename_prefix_for_split
    if os.path.basename(name) != name:
  File "/home/daniel/code/pytorch/env/lib/python3.8/posixpath.py", line 143, in basename
    p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType
```

5. Bonus: try to regenerate the metadata in `README.md` with `datasets-cli` as in step 2 and get the same error. This is because `dataset.info.splits` contains only the `"train"` split, so when we do `self.info.splits[split_generator.name]` it tries to infer something like `info.splits['train[50%]']`, which doesn't exist, and it fails.

### Expected behavior

To be discussed? This can be solved by removing the splits information from the metadata file first, but I wonder if there is a better way.

### Environment info

- `datasets` version: 2.7.1
- Python version: 3.8.13
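A sketch of the "remove the splits info first" workaround mentioned above, assuming `huggingface_hub`'s `metadata_load`/`metadata_save` helpers are available (this is my own illustration, not code from the thread):

```python
from huggingface_hub import metadata_load, metadata_save

# Drop the stale `splits` entry from README.md's YAML front matter so that
# `datasets-cli test ... --save_info` can rebuild it from the new script.
metadata = metadata_load("README.md")
if isinstance(metadata.get("dataset_info"), dict):
    metadata["dataset_info"].pop("splits", None)
metadata_save("README.md", metadata)
```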
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5315/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5315/timeline
No information
No information
105
https://api.github.com/repos/huggingface/datasets/issues/5314
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5314/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5314/comments
https://api.github.com/repos/huggingface/datasets/issues/5314/events
https://github.com/huggingface/datasets/issues/5314
1,469,685,118
I_kwDODunzps5XmZ1-
5,314
Datasets: classification_report() got an unexpected keyword argument 'suffix'
{'login': 'JonathanAlis', 'id': 42126634, 'node_id': 'MDQ6VXNlcjQyMTI2NjM0', 'avatar_url': 'https://avatars.githubusercontent.com/u/42126634?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/JonathanAlis', 'html_url': 'https://github.com/JonathanAlis', 'followers_url': 'https://api.github.com/users/JonathanAlis/followers', 'following_url': 'https://api.github.com/users/JonathanAlis/following{/other_user}', 'gists_url': 'https://api.github.com/users/JonathanAlis/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/JonathanAlis/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/JonathanAlis/subscriptions', 'organizations_url': 'https://api.github.com/users/JonathanAlis/orgs', 'repos_url': 'https://api.github.com/users/JonathanAlis/repos', 'events_url': 'https://api.github.com/users/JonathanAlis/events{/privacy}', 'received_events_url': 'https://api.github.com/users/JonathanAlis/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
0
2022-11-30T14:01:03Z
2022-11-30T14:01:03Z
No information
NONE
No information
No information
No information
https://github.com/huggingface/datasets/blob/main/metrics/seqeval/seqeval.py

```python
import datasets

predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
seqeval = datasets.load_metric("seqeval")
results = seqeval.compute(predictions=predictions, references=references)
print(list(results.keys()))
print(results["overall_f1"])
print(results["PER"]["f1"])
```

It raises the error:

```
TypeError: classification_report() got an unexpected keyword argument 'suffix'
```

For context, the relevant versions from my `pip list -v`:

```
datasets 1.12.1
seqeval 1.2.2
```
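A small diagnostic sketch (mine, not from the issue): checking where `classification_report` actually resolves from, and whether its signature accepts `suffix`, helps tell a stale `seqeval` install apart from a shadowing module.

```python
import inspect
from seqeval.metrics import classification_report

# Expect something like "seqeval.metrics.sequence_labeling"; anything else
# suggests a local file or another package is shadowing seqeval.
print(classification_report.__module__)
# Recent seqeval releases list `suffix` among the parameters here.
print(inspect.signature(classification_report))
```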
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5314/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5314/timeline
No information
No information
106
https://api.github.com/repos/huggingface/datasets/issues/5313
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5313/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5313/comments
https://api.github.com/repos/huggingface/datasets/issues/5313/events
https://github.com/huggingface/datasets/pull/5313
1,468,484,136
PR_kwDODunzps5D6Qfb
5,313
Fix description of streaming in the docs
{'login': 'polinaeterna', 'id': 16348744, 'node_id': 'MDQ6VXNlcjE2MzQ4NzQ0', 'avatar_url': 'https://avatars.githubusercontent.com/u/16348744?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/polinaeterna', 'html_url': 'https://github.com/polinaeterna', 'followers_url': 'https://api.github.com/users/polinaeterna/followers', 'following_url': 'https://api.github.com/users/polinaeterna/following{/other_user}', 'gists_url': 'https://api.github.com/users/polinaeterna/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/polinaeterna/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/polinaeterna/subscriptions', 'organizations_url': 'https://api.github.com/users/polinaeterna/orgs', 'repos_url': 'https://api.github.com/users/polinaeterna/repos', 'events_url': 'https://api.github.com/users/polinaeterna/events{/privacy}', 'received_events_url': 'https://api.github.com/users/polinaeterna/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
1
2022-11-29T18:00:28Z
2022-11-30T12:14:24Z
No information
CONTRIBUTOR
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5313', 'html_url': 'https://github.com/huggingface/datasets/pull/5313', 'diff_url': 'https://github.com/huggingface/datasets/pull/5313.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5313.patch', 'merged_at': None}
We say that "the data is being downloaded progressively" which is not true, it's just streamed, so I fixed it. Probably I missed some other places where it is written? Also changed docstrings for `StreamingDownloadManager`'s `download` and `extract` to reflect the same, as these docstrings are displayed in the documentation cc @lhoestq
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5313/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5313/timeline
No information
No information
107
https://api.github.com/repos/huggingface/datasets/issues/5312
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5312/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5312/comments
https://api.github.com/repos/huggingface/datasets/issues/5312/events
https://github.com/huggingface/datasets/pull/5312
1,468,352,562
PR_kwDODunzps5D5zxI
5,312
Add DatasetDict.to_pandas
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
4
2022-11-29T16:30:02Z
2022-11-30T14:29:02Z
No information
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5312', 'html_url': 'https://github.com/huggingface/datasets/pull/5312', 'diff_url': 'https://github.com/huggingface/datasets/pull/5312.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5312.patch', 'merged_at': None}
From discussions in https://github.com/huggingface/datasets/issues/5189: for tabular data, it doesn't really make sense to have to do

```python
df = load_dataset(...)["train"].to_pandas()
```

because many datasets are not split. In this PR I added `to_pandas` to `DatasetDict`, which returns the DataFrame of all the splits:

```python
df = load_dataset(...).to_pandas()
```

EDIT: and if a dataset has multiple splits:

```python
df = load_dataset(...).to_pandas(splits=["train", "test"])
# or
df = load_dataset(...).to_pandas(splits="all")
```

I do have one question though @merveenoyan @adrinjalali @mariosasko: should we raise an error if there are multiple splits and ask the user to choose one explicitly?
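For intuition, a rough equivalent of what such a method would return for several splits (a sketch under my own assumptions, not the PR's implementation):

```python
import pandas as pd
from datasets import load_dataset

# Any DatasetDict works; iterating it yields the split names.
dsets = load_dataset("rotten_tomatoes")
df = pd.concat([dsets[split].to_pandas() for split in dsets], ignore_index=True)
```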
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5312/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5312/timeline
No information
No information
108
https://api.github.com/repos/huggingface/datasets/issues/5311
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5311/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5311/comments
https://api.github.com/repos/huggingface/datasets/issues/5311/events
https://github.com/huggingface/datasets/pull/5311
1,467,875,153
PR_kwDODunzps5D4Mm3
5,311
Add `features` param to `IterableDataset.map`
{'login': 'alvarobartt', 'id': 36760800, 'node_id': 'MDQ6VXNlcjM2NzYwODAw', 'avatar_url': 'https://avatars.githubusercontent.com/u/36760800?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/alvarobartt', 'html_url': 'https://github.com/alvarobartt', 'followers_url': 'https://api.github.com/users/alvarobartt/followers', 'following_url': 'https://api.github.com/users/alvarobartt/following{/other_user}', 'gists_url': 'https://api.github.com/users/alvarobartt/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/alvarobartt/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/alvarobartt/subscriptions', 'organizations_url': 'https://api.github.com/users/alvarobartt/orgs', 'repos_url': 'https://api.github.com/users/alvarobartt/repos', 'events_url': 'https://api.github.com/users/alvarobartt/events{/privacy}', 'received_events_url': 'https://api.github.com/users/alvarobartt/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
1
2022-11-29T11:08:34Z
2022-11-30T08:55:38Z
No information
CONTRIBUTOR
No information
True
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5311', 'html_url': 'https://github.com/huggingface/datasets/pull/5311', 'diff_url': 'https://github.com/huggingface/datasets/pull/5311.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5311.patch', 'merged_at': None}
## Description

As suggested by @lhoestq in #3888, we should add the `features` param to `IterableDataset.map` so that the features can be preserved (rather than turned into `None`, the default behavior) whenever the user passes them. This makes it consistent with `Dataset.map`, which provides a `features` param so that the features are specified by the user rather than inferred, and later validated by `ArrowWriter`.

This is already handled internally by the functions relying on `IterableDataset.map`, such as `rename_column`, `rename_columns`, and `remove_columns`, as described in #5287.

## Usage Example

```python
from datasets import load_dataset, Features

ds = load_dataset("rotten_tomatoes", split="validation", streaming=True)
print(ds.info.features)
ds = ds.map(
    lambda x: {"target": x["label"]},
    features=Features(
        {"target": ds.info.features["label"], "label": ds.info.features["label"], "text": ds.info.features["text"]}
    ),
)
print(ds.info.features)
```
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5311/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5311/timeline
No information
No information
109
https://api.github.com/repos/huggingface/datasets/issues/5310
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5310/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5310/comments
https://api.github.com/repos/huggingface/datasets/issues/5310/events
https://github.com/huggingface/datasets/pull/5310
1,467,719,635
PR_kwDODunzps5D3rGw
5,310
Support xPath for Windows pathnames
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
1
2022-11-29T09:20:47Z
2022-11-30T12:00:09Z
2022-11-30T11:57:16Z
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5310', 'html_url': 'https://github.com/huggingface/datasets/pull/5310', 'diff_url': 'https://github.com/huggingface/datasets/pull/5310.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5310.patch', 'merged_at': '2022-11-30T11:57:16Z'}
This PR implements a string representation of `xPath`, which is valid for local paths (also on Windows) and remote URLs. Additionally, some `os.path` methods are fixed for remote URLs on Windows machines. Now, on Windows machines:

```python
In [2]: str(xPath("C:\\dir\\file.txt"))
Out[2]: 'C:\\dir\\file.txt'

In [3]: str(xPath("http://domain.com/file.txt"))
Out[3]: 'http://domain.com/file.txt'
```
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5310/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5310/timeline
No information
No information
110
https://api.github.com/repos/huggingface/datasets/issues/5309
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5309/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5309/comments
https://api.github.com/repos/huggingface/datasets/issues/5309/events
https://github.com/huggingface/datasets/pull/5309
1,466,758,987
PR_kwDODunzps5D0g1y
5,309
Close stream in `ArrowWriter.finalize` before inference error
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/followers', 'following_url': 'https://api.github.com/users/mariosasko/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariosasko/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariosasko/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariosasko/subscriptions', 'organizations_url': 'https://api.github.com/users/mariosasko/orgs', 'repos_url': 'https://api.github.com/users/mariosasko/repos', 'events_url': 'https://api.github.com/users/mariosasko/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariosasko/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
1
2022-11-28T16:59:39Z
2022-11-28T17:05:59Z
No information
CONTRIBUTOR
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5309', 'html_url': 'https://github.com/huggingface/datasets/pull/5309', 'diff_url': 'https://github.com/huggingface/datasets/pull/5309.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5309.patch', 'merged_at': None}
Ensure the file stream is closed in `ArrowWriter.finalize` before raising the `SchemaInferenceError` to avoid the `PermissionError` on Windows in `incomplete_dir`'s `shutil.rmtree`.
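The general ordering the fix enforces, as a standalone sketch (not the PR's actual code): release the file handle before surfacing the error, since an open handle inside the target directory makes `shutil.rmtree` raise `PermissionError` on Windows.

```python
class SchemaInferenceError(Exception):
    pass

def finalize(stream, schema):
    # Close first: on Windows, an open handle in the dataset's incomplete
    # directory would make the caller's shutil.rmtree cleanup fail.
    if schema is None:
        stream.close()
        raise SchemaInferenceError("Please pass `features` or at least one example")
    stream.close()
```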
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5309/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5309/timeline
No information
No information
111
https://api.github.com/repos/huggingface/datasets/issues/5308
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5308/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5308/comments
https://api.github.com/repos/huggingface/datasets/issues/5308/events
https://github.com/huggingface/datasets/pull/5308
1,466,552,281
PR_kwDODunzps5Dz0Tv
5,308
Support `topdown` parameter in `xwalk`
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/followers', 'following_url': 'https://api.github.com/users/mariosasko/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariosasko/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariosasko/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariosasko/subscriptions', 'organizations_url': 'https://api.github.com/users/mariosasko/orgs', 'repos_url': 'https://api.github.com/users/mariosasko/repos', 'events_url': 'https://api.github.com/users/mariosasko/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariosasko/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
1
2022-11-28T14:42:41Z
2022-11-30T12:44:35Z
No information
CONTRIBUTOR
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5308', 'html_url': 'https://github.com/huggingface/datasets/pull/5308', 'diff_url': 'https://github.com/huggingface/datasets/pull/5308.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5308.patch', 'merged_at': None}
Add support for the `topdown` parameter in `xwalk` when `fsspec>=2022.11.0` is installed.
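For context, `xwalk` mirrors `os.walk`, where `topdown=True` lets callers prune subtrees in place before they are visited; a quick illustration with the standard library (my example, not the PR's tests):

```python
import os

for root, dirs, files in os.walk(".", topdown=True):
    # Pruning `dirs` in place only works top-down: directories removed here
    # are never descended into.
    dirs[:] = [d for d in dirs if not d.startswith(".")]
    print(root, len(files), "files")
```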
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5308/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5308/timeline
No information
No information
112
https://api.github.com/repos/huggingface/datasets/issues/5307
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5307/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5307/comments
https://api.github.com/repos/huggingface/datasets/issues/5307/events
https://github.com/huggingface/datasets/pull/5307
1,466,477,427
PR_kwDODunzps5Dzj8r
5,307
Use correct dataset type in `from_generator` docs
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/followers', 'following_url': 'https://api.github.com/users/mariosasko/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariosasko/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariosasko/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariosasko/subscriptions', 'organizations_url': 'https://api.github.com/users/mariosasko/orgs', 'repos_url': 'https://api.github.com/users/mariosasko/repos', 'events_url': 'https://api.github.com/users/mariosasko/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariosasko/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
1
2022-11-28T13:59:10Z
2022-11-28T15:30:37Z
2022-11-28T15:27:26Z
CONTRIBUTOR
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5307', 'html_url': 'https://github.com/huggingface/datasets/pull/5307', 'diff_url': 'https://github.com/huggingface/datasets/pull/5307.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5307.patch', 'merged_at': '2022-11-28T15:27:26Z'}
Use the correct dataset type in the `from_generator` docs (example with sharding).
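For reference, the sharded-generator pattern that docs example covers looks roughly like this (a sketch with hypothetical shard files, not the docs' exact snippet):

```python
from datasets import IterableDataset

def gen(shards):
    for shard in shards:
        with open(shard) as f:
            for line in f:
                yield {"line": line.rstrip("\n")}

shards = [f"data{i}.txt" for i in range(32)]  # hypothetical shard files
ds = IterableDataset.from_generator(gen, gen_kwargs={"shards": shards})
```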
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5307/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5307/timeline
No information
No information
113
https://api.github.com/repos/huggingface/datasets/issues/5306
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5306/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5306/comments
https://api.github.com/repos/huggingface/datasets/issues/5306/events
https://github.com/huggingface/datasets/issues/5306
1,465,968,639
I_kwDODunzps5XYOf_
5,306
Can't use custom feature description when loading a dataset
{'login': 'clefourrier', 'id': 22726840, 'node_id': 'MDQ6VXNlcjIyNzI2ODQw', 'avatar_url': 'https://avatars.githubusercontent.com/u/22726840?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/clefourrier', 'html_url': 'https://github.com/clefourrier', 'followers_url': 'https://api.github.com/users/clefourrier/followers', 'following_url': 'https://api.github.com/users/clefourrier/following{/other_user}', 'gists_url': 'https://api.github.com/users/clefourrier/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/clefourrier/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/clefourrier/subscriptions', 'organizations_url': 'https://api.github.com/users/clefourrier/orgs', 'repos_url': 'https://api.github.com/users/clefourrier/repos', 'events_url': 'https://api.github.com/users/clefourrier/events{/privacy}', 'received_events_url': 'https://api.github.com/users/clefourrier/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
1
2022-11-28T07:55:44Z
2022-11-28T08:11:45Z
2022-11-28T08:11:44Z
CONTRIBUTOR
No information
No information
No information
### Describe the bug

I have created a feature dictionary to describe my datasets' column types, to use when loading the dataset, following [the doc](https://huggingface.co/docs/datasets/main/en/about_dataset_features). It crashes at dataset load.

### Steps to reproduce the bug

```python
# Creating features
task_list = [f"motif_G{i}" for i in range(19, 53)]
features = {t: Sequence(feature=Value(dtype="float64")) for t in task_list}
for col_name in ["class_label"]:
    features[col_name] = Sequence(feature=Value(dtype="int64"))
for col_name in ["num_nodes"]:
    features[col_name] = Value(dtype="int64")
for col_name in ["num_bridges", "num_cycles", "avg_shortest_path_len"]:
    features[col_name] = Sequence(feature=Value(dtype="float64"))
for col_name in ["edge_attr", "node_feat", "edge_index"]:
    features[col_name] = Sequence(feature=Sequence(feature=Value(dtype="int64")))
print(features)

dataset = load_dataset(path=f"graphs-datasets/unbalanced-motifs-500K", split="train", features=features)
```

The last line will crash with `TypeError: argument of type 'Sequence' is not iterable`.

Full stack:

```
Traceback (most recent call last):
  File "pretrain_tokengt.py", line 131, in <module>
    main(output_folder = "../workspace/pretraining",
  File "pretrain_tokengt.py", line 52, in main
    dataset = load_dataset(path=f"graphs-datasets/{dataset_name}", split="train", features=features)
  File "huggingface_env/lib/python3.8/site-packages/datasets/load.py", line 1718, in load_dataset
    builder_instance = load_dataset_builder(
  File "huggingface_env/lib/python3.8/site-packages/datasets/load.py", line 1514, in load_dataset_builder
    builder_instance: DatasetBuilder = builder_cls(
  File "huggingface_env/lib/python3.8/site-packages/datasets/builder.py", line 321, in __init__
    info.update(self._info())
  File "huggingface_env/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py", line 62, in _info
    return datasets.DatasetInfo(features=self.config.features)
  File "<string>", line 20, in __init__
  File "huggingface_env/lib/python3.8/site-packages/datasets/info.py", line 155, in __post_init__
    self.features = Features.from_dict(self.features)
  File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1599, in from_dict
    obj = generate_from_dict(dic)
  File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1282, in generate_from_dict
    return {key: generate_from_dict(value) for key, value in obj.items()}
  File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1282, in <dictcomp>
    return {key: generate_from_dict(value) for key, value in obj.items()}
  File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1281, in generate_from_dict
    if "_type" not in obj or isinstance(obj["_type"], dict):
TypeError: argument of type 'Sequence' is not iterable
```

### Expected behavior

For it not to crash.

### Environment info

- `datasets` version: 2.7.1
- Platform: Linux-5.14.0-1054-oem-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
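One plausible fix sketch (my own reading of the traceback, so verify against the thread): wrap the plain dict in `datasets.Features`, so `load_dataset` never runs the dict-to-feature conversion (`Features.from_dict`) that chokes on raw `Sequence` objects.

```python
from datasets import Features, Sequence, Value, load_dataset

# A trimmed-down version of the features above, wrapped in `Features`.
features = Features({
    "num_nodes": Value(dtype="int64"),
    "edge_index": Sequence(feature=Sequence(feature=Value(dtype="int64"))),
})
dataset = load_dataset("graphs-datasets/unbalanced-motifs-500K", split="train", features=features)
```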
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5306/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5306/timeline
No information
completed
114
https://api.github.com/repos/huggingface/datasets/issues/5305
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5305/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5305/comments
https://api.github.com/repos/huggingface/datasets/issues/5305/events
https://github.com/huggingface/datasets/issues/5305
1,465,627,826
I_kwDODunzps5XW7Sy
5,305
Dataset joelito/mc4_legal does not work with multiple files
{'login': 'JoelNiklaus', 'id': 3775944, 'node_id': 'MDQ6VXNlcjM3NzU5NDQ=', 'avatar_url': 'https://avatars.githubusercontent.com/u/3775944?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/JoelNiklaus', 'html_url': 'https://github.com/JoelNiklaus', 'followers_url': 'https://api.github.com/users/JoelNiklaus/followers', 'following_url': 'https://api.github.com/users/JoelNiklaus/following{/other_user}', 'gists_url': 'https://api.github.com/users/JoelNiklaus/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/JoelNiklaus/subscriptions', 'organizations_url': 'https://api.github.com/users/JoelNiklaus/orgs', 'repos_url': 'https://api.github.com/users/JoelNiklaus/repos', 'events_url': 'https://api.github.com/users/JoelNiklaus/events{/privacy}', 'received_events_url': 'https://api.github.com/users/JoelNiklaus/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}]
No information
2
2022-11-28T00:16:16Z
2022-11-28T07:22:42Z
2022-11-28T07:22:42Z
CONTRIBUTOR
No information
No information
No information
### Describe the bug

The dataset https://huggingface.co/datasets/joelito/mc4_legal works for languages like bg with a single data file, but not for languages with multiple files like de. It shows zero rows for the de dataset.

```
joelniklaus@Joels-MacBook-Pro ~/N/P/C/L/p/m/mc4_legal (main) [1]> python test_mc4_legal.py (debug)
Found cached dataset mc4_legal (/Users/joelniklaus/.cache/huggingface/datasets/mc4_legal/de/0.0.0/fb6952a097180f8c936e2a7605525ff670354a344fc1a2c70107684d3f7cb02f)
Dataset({
    features: ['index', 'url', 'timestamp', 'matches', 'text'],
    num_rows: 0
})
joelniklaus@Joels-MacBook-Pro ~/N/P/C/L/p/m/mc4_legal (main)> python test_mc4_legal.py (debug)
Downloading and preparing dataset mc4_legal/bg to /Users/joelniklaus/.cache/huggingface/datasets/mc4_legal/bg/0.0.0/fb6952a097180f8c936e2a7605525ff670354a344fc1a2c70107684d3f7cb02f...
Downloading data files: 100%|██████████| 1/1 [00:00<00:00, 1240.55it/s]
Dataset mc4_legal downloaded and prepared to /Users/joelniklaus/.cache/huggingface/datasets/mc4_legal/bg/0.0.0/fb6952a097180f8c936e2a7605525ff670354a344fc1a2c70107684d3f7cb02f. Subsequent calls will reuse this data.
Dataset({
    features: ['index', 'url', 'timestamp', 'matches', 'text'],
    num_rows: 204
})
```

### Steps to reproduce the bug

```python
import datasets
from datasets import load_dataset, get_dataset_config_names

language = "bg"
test = load_dataset("joelito/mc4_legal", language, split='train')
```

### Expected behavior

It should display the correct number of rows for the de dataset, which should be a large number (thousands or more).
### Environment info

```
Package Version
absl-py 1.3.0
aiohttp 3.8.1
aiosignal 1.2.0
astunparse 1.6.3
async-timeout 4.0.2
attrs 22.1.0
beautifulsoup4 4.11.1
blinker 1.4
blis 0.7.8
Bottleneck 1.3.4
brotlipy 0.7.0
cachetools 5.2.0
catalogue 2.0.7
certifi 2022.5.18.1
cffi 1.15.1
chardet 4.0.0
charset-normalizer 2.1.0
click 8.0.4
conllu 4.5.2
cryptography 38.0.1
cymem 2.0.6
datasets 2.6.1
dill 0.3.5.1
docker-pycreds 0.4.0
fasttext 0.9.2
fasttext-langdetect 1.0.3
filelock 3.0.12
flatbuffers 20210226132247
frozenlist 1.3.0
fsspec 2022.5.0
gast 0.4.0
gcloud 0.18.3
gitdb 4.0.9
GitPython 3.1.27
google-auth 2.9.0
google-auth-oauthlib 0.4.6
google-pasta 0.2.0
googleapis-common-protos 1.57.0
grpcio 1.47.0
h5py 3.7.0
httplib2 0.21.0
huggingface-hub 0.8.1
idna 3.4
importlib-metadata 4.12.0
Jinja2 3.1.2
joblib 1.0.1
keras 2.9.0
Keras-Preprocessing 1.1.2
langcodes 3.3.0
lxml 4.9.1
Markdown 3.3.7
MarkupSafe 2.1.1
mkl-fft 1.3.1
mkl-random 1.2.2
mkl-service 2.4.0
multidict 6.0.2
multiprocess 0.70.13
murmurhash 1.0.7
numexpr 2.8.1
numpy 1.22.3
oauth2client 4.1.3
oauthlib 3.2.1
opt-einsum 3.3.0
packaging 21.3
pandas 1.4.2
pathtools 0.1.2
pathy 0.6.1
pip 21.1.2
preshed 3.0.6
promise 2.3
protobuf 4.21.9
psutil 5.9.1
pyarrow 8.0.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
pybind11 2.9.2
pycountry 22.3.5
pycparser 2.21
pydantic 1.8.2
PyJWT 2.4.0
pylzma 0.5.0
pyOpenSSL 22.0.0
pyparsing 3.0.4
PySocks 1.7.1
python-dateutil 2.8.2
pytz 2021.3
PyYAML 6.0
regex 2021.4.4
requests 2.28.1
requests-oauthlib 1.3.1
responses 0.18.0
rsa 4.8
sacremoses 0.0.45
scikit-learn 1.1.1
scipy 1.8.1
sentencepiece 0.1.96
sentry-sdk 1.6.0
setproctitle 1.2.3
setuptools 65.5.0
shortuuid 1.0.9
six 1.16.0
smart-open 5.2.1
smmap 5.0.0
soupsieve 2.3.2.post1
spacy 3.3.1
spacy-legacy 3.0.9
spacy-loggers 1.0.2
srsly 2.4.3
tabulate 0.8.9
tensorboard 2.9.1
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
tensorflow 2.9.1
tensorflow-estimator 2.9.0
termcolor 2.1.0
thinc 8.0.17
threadpoolctl 3.1.0
tokenizers 0.12.1
torch 1.13.0
tqdm 4.64.0
transformers 4.20.1
typer 0.4.1
typing-extensions 4.3.0
Unidecode 1.3.6
urllib3 1.26.12
wandb 0.12.20
wasabi 0.9.1
web-anno-tsv 0.0.1
Werkzeug 2.1.2
wget 3.2
wheel 0.35.1
wrapt 1.14.1
xxhash 3.0.0
yarl 1.8.1
zipp 3.8.0
```

Python 3.8.10
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5305/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5305/timeline
No information
completed
115
https://api.github.com/repos/huggingface/datasets/issues/5304
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5304/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5304/comments
https://api.github.com/repos/huggingface/datasets/issues/5304/events
https://github.com/huggingface/datasets/issues/5304
1,465,110,367
I_kwDODunzps5XU89f
5,304
timit_asr doesn't load the test split.
{'login': 'seyong92', 'id': 17842800, 'node_id': 'MDQ6VXNlcjE3ODQyODAw', 'avatar_url': 'https://avatars.githubusercontent.com/u/17842800?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/seyong92', 'html_url': 'https://github.com/seyong92', 'followers_url': 'https://api.github.com/users/seyong92/followers', 'following_url': 'https://api.github.com/users/seyong92/following{/other_user}', 'gists_url': 'https://api.github.com/users/seyong92/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/seyong92/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/seyong92/subscriptions', 'organizations_url': 'https://api.github.com/users/seyong92/orgs', 'repos_url': 'https://api.github.com/users/seyong92/repos', 'events_url': 'https://api.github.com/users/seyong92/events{/privacy}', 'received_events_url': 'https://api.github.com/users/seyong92/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
0
2022-11-26T10:18:22Z
2022-11-26T10:18:22Z
No information
NONE
No information
No information
No information
### Describe the bug When I use the function ```timit = load_dataset('timit_asr', data_dir=data_dir)```, it only loads the train split, not the test split. I tried changing the directory and file names for the test split from lower case to upper case, but it does not work at all. ```python DatasetDict({ train: Dataset({ features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'], num_rows: 4620 }) test: Dataset({ features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'], num_rows: 0 }) }) ``` The directory structure of both splits is the same (DIALECT_REGION / SPEAKER_CODE / DATA_FILES). ### Steps to reproduce the bug 1. Just use ```timit = load_dataset('timit_asr', data_dir=data_dir)``` ### Expected behavior ```python DatasetDict({ train: Dataset({ features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'], num_rows: 4620 }) test: Dataset({ features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'], num_rows: 1680 }) }) ``` ### Environment info - ubuntu 20.04 - python 3.9.13 - datasets 2.7.1
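A quick way to check whether this is a file-name casing problem, since globbing is case-sensitive on Linux (all paths below are placeholders, not the reporter's actual setup):

```python
from pathlib import Path

# Hypothetical check: see which casing of the split directories actually
# exists on disk under the TIMIT root passed as data_dir.
data_dir = Path("/path/to/timit")  # placeholder for the real TIMIT root
for split in ("train", "TRAIN", "test", "TEST"):
    candidate = data_dir / split
    n_files = len(list(candidate.rglob("*"))) if candidate.is_dir() else 0
    print(f"{split}: exists={candidate.is_dir()}, files={n_files}")
```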
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5304/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5304/timeline
No information
No information
116
https://api.github.com/repos/huggingface/datasets/issues/5303
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5303/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5303/comments
https://api.github.com/repos/huggingface/datasets/issues/5303/events
https://github.com/huggingface/datasets/pull/5303
1,464,837,251
PR_kwDODunzps5DuVTa
5,303
Skip dataset verifications by default
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/followers', 'following_url': 'https://api.github.com/users/mariosasko/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariosasko/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariosasko/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariosasko/subscriptions', 'organizations_url': 'https://api.github.com/users/mariosasko/orgs', 'repos_url': 'https://api.github.com/users/mariosasko/repos', 'events_url': 'https://api.github.com/users/mariosasko/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariosasko/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
1
2022-11-25T18:39:09Z
2022-11-25T18:44:23Z
No information
CONTRIBUTOR
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5303', 'html_url': 'https://github.com/huggingface/datasets/pull/5303', 'diff_url': 'https://github.com/huggingface/datasets/pull/5303.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5303.patch', 'merged_at': None}
Skip the dataset verifications (split and checksum verifications, duplicate keys check) by default unless a dataset is being tested (`datasets-cli test/run_beam`). The main goal is to avoid running the checksum check in the default case due to how expensive it can be for large datasets. PS: Maybe we should deprecate `ignore_verifications`, which is `True` now by default, and give it a different name?
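For context, a minimal sketch of how a user could still opt in to the checks after this change, assuming the existing `ignore_verifications` flag keeps its current meaning:

```python
from datasets import load_dataset

# Opt back in to the (now skipped-by-default) split/checksum verifications;
# assumes the existing `ignore_verifications` parameter keeps working.
ds = load_dataset("rotten_tomatoes", ignore_verifications=False)
```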
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5303/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5303/timeline
No information
No information
117
https://api.github.com/repos/huggingface/datasets/issues/5302
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5302/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5302/comments
https://api.github.com/repos/huggingface/datasets/issues/5302/events
https://github.com/huggingface/datasets/pull/5302
1,464,778,901
PR_kwDODunzps5DuJJp
5,302
Improve `use_auth_token` docstring and deprecate `use_auth_token` in `download_and_prepare`
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/followers', 'following_url': 'https://api.github.com/users/mariosasko/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariosasko/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariosasko/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariosasko/subscriptions', 'organizations_url': 'https://api.github.com/users/mariosasko/orgs', 'repos_url': 'https://api.github.com/users/mariosasko/repos', 'events_url': 'https://api.github.com/users/mariosasko/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariosasko/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
1
2022-11-25T17:09:21Z
2022-11-28T12:40:12Z
No information
CONTRIBUTOR
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5302', 'html_url': 'https://github.com/huggingface/datasets/pull/5302', 'diff_url': 'https://github.com/huggingface/datasets/pull/5302.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5302.patch', 'merged_at': None}
Clarify in the docstrings what happens when `use_auth_token` is `None` and deprecate the `use_auth_token` param in `download_and_prepare`.
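A short sketch of the parameter modes the docstring should spell out (the repo name and token below are placeholders):

```python
from datasets import load_dataset

ds = load_dataset("my-org/private-dataset", use_auth_token=True)      # reuse the token saved by `huggingface-cli login`
ds = load_dataset("my-org/private-dataset", use_auth_token="hf_xxx")  # pass an explicit token string
# use_auth_token=None is the default; its exact fallback behavior is
# precisely what this PR documents in the docstrings.
```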
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5302/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5302/timeline
No information
No information
118
https://api.github.com/repos/huggingface/datasets/issues/5301
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5301/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5301/comments
https://api.github.com/repos/huggingface/datasets/issues/5301/events
https://github.com/huggingface/datasets/pull/5301
1,464,749,156
PR_kwDODunzps5DuCzR
5,301
Return a split Dataset in load_dataset
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
2
2022-11-25T16:35:54Z
2022-11-30T16:53:34Z
No information
MEMBER
No information
True
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5301', 'html_url': 'https://github.com/huggingface/datasets/pull/5301', 'diff_url': 'https://github.com/huggingface/datasets/pull/5301.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5301.patch', 'merged_at': None}
...instead of a DatasetDict. ```python # now supported ds = load_dataset("squad") ds[0] for example in ds: pass # still works ds["train"] ds["validation"] # new ds.splits # Dict[str, Dataset] | None # soon to be supported (not in this PR) ds = load_dataset("dataset_with_no_splits") ds[0] for example in ds: pass ``` I implemented `Dataset.__getitem__` and `IterableDataset.__getitem__` to be able to get a split from a dataset. The splits are defined by the `ds.info.splits` dictionary. Therefore, a dataset is a table that optionally has some splits defined in the dataset info, and a split dataset is the concatenation of all its splits. I made as few breaking changes as possible. Notable breaking changes: - `load_dataset("potato").keys()/.items()/.values()` don't work anymore, since we don't return a dict - same for `for split_name in load_dataset("potato")`, since we now iterate over the examples - .. TODO: - [x] Update push_to_hub - [x] Update save_to_disk/load_from_disk - [ ] check for other breaking changes - [ ] fix existing tests - [ ] add new tests - [ ] docs This is related to https://github.com/huggingface/datasets/issues/5189, to extend `load_dataset` to return datasets without splits
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5301/reactions', 'total_count': 1, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 1, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5301/timeline
No information
No information
119
https://api.github.com/repos/huggingface/datasets/issues/5300
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5300/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5300/comments
https://api.github.com/repos/huggingface/datasets/issues/5300/events
https://github.com/huggingface/datasets/pull/5300
1,464,697,136
PR_kwDODunzps5Dt3uK
5,300
Use same `num_proc` for dataset download and generation
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/followers', 'following_url': 'https://api.github.com/users/mariosasko/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariosasko/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariosasko/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariosasko/subscriptions', 'organizations_url': 'https://api.github.com/users/mariosasko/orgs', 'repos_url': 'https://api.github.com/users/mariosasko/repos', 'events_url': 'https://api.github.com/users/mariosasko/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariosasko/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
2
2022-11-25T15:37:42Z
2022-11-25T15:52:04Z
No information
CONTRIBUTOR
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5300', 'html_url': 'https://github.com/huggingface/datasets/pull/5300', 'diff_url': 'https://github.com/huggingface/datasets/pull/5300.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5300.patch', 'merged_at': None}
Use the same `num_proc` value for data download and generation. Additionally, do not set `num_proc` to 16 in `DownloadManager` by default (`num_proc` now has to be specified explicitly).
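A minimal usage sketch, assuming the `num_proc` argument exposed via `load_dataset` is the one this PR unifies (the dataset name is a placeholder):

```python
from datasets import load_dataset

# One num_proc value now drives both the download and the generation steps.
ds = load_dataset("my-org/large-dataset", num_proc=8)
```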
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5300/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5300/timeline
No information
No information
120
https://api.github.com/repos/huggingface/datasets/issues/5299
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5299/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5299/comments
https://api.github.com/repos/huggingface/datasets/issues/5299/events
https://github.com/huggingface/datasets/pull/5299
1,464,695,091
PR_kwDODunzps5Dt3Sk
5,299
Fix xopen for Windows pathnames
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
1
2022-11-25T15:35:28Z
2022-11-29T08:23:58Z
2022-11-29T08:21:24Z
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5299', 'html_url': 'https://github.com/huggingface/datasets/pull/5299', 'diff_url': 'https://github.com/huggingface/datasets/pull/5299.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5299.patch', 'merged_at': '2022-11-29T08:21:24Z'}
This PR fixes a bug in `xopen` function for Windows pathnames. Fix #5298.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5299/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5299/timeline
No information
No information
121
https://api.github.com/repos/huggingface/datasets/issues/5298
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5298/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5298/comments
https://api.github.com/repos/huggingface/datasets/issues/5298/events
https://github.com/huggingface/datasets/issues/5298
1,464,681,871
I_kwDODunzps5XTUWP
5,298
Bug in xopen with Windows pathnames
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}]
closed
false
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}]
No information
0
2022-11-25T15:21:32Z
2022-11-29T08:21:25Z
2022-11-29T08:21:25Z
MEMBER
No information
No information
No information
Currently, the `xopen` function has a bug with local Windows pathnames. From its implementation: ```python def xopen(file: str, mode="r", *args, **kwargs): file = _as_posix(PurePath(file)) main_hop, *rest_hops = file.split("::") if is_local_path(main_hop): return open(file, mode, *args, **kwargs) ``` On a Windows machine, if we pass the argument: ```python xopen("C:\\Users\\USERNAME\\filename.txt") ``` it effectively calls ```python open("C:/Users/USERNAME/filename.txt") ```
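One possible shape of the fix, shown only as a sketch (the actual patch is in PR #5299): keep the OS-specific path for local files and only POSIX-normalize the remote hops.

```python
from pathlib import PurePath

def is_local_path(path: str) -> bool:
    return "://" not in path  # crude stand-in for datasets' own helper

# Sketch only, not the real patch: open local files with their original
# OS-specific path instead of the POSIX-converted one.
def xopen(file: str, mode="r", *args, **kwargs):
    main_hop, *rest_hops = str(file).split("::")
    if is_local_path(main_hop):
        return open(main_hop, mode, *args, **kwargs)  # no as_posix() conversion
    file = PurePath(file).as_posix()
    raise NotImplementedError("remote/streaming handling omitted in this sketch")
```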
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5298/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5298/timeline
No information
completed
122
https://api.github.com/repos/huggingface/datasets/issues/5297
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5297/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5297/comments
https://api.github.com/repos/huggingface/datasets/issues/5297/events
https://github.com/huggingface/datasets/pull/5297
1,464,554,491
PR_kwDODunzps5DtZjg
5,297
Fix xjoin for Windows pathnames
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
1
2022-11-25T13:30:17Z
2022-11-29T08:07:39Z
2022-11-29T08:05:12Z
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5297', 'html_url': 'https://github.com/huggingface/datasets/pull/5297', 'diff_url': 'https://github.com/huggingface/datasets/pull/5297.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5297.patch', 'merged_at': '2022-11-29T08:05:12Z'}
This PR fixes a bug in `xjoin` function with Windows pathnames. Fix #5296.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5297/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5297/timeline
No information
No information
123
https://api.github.com/repos/huggingface/datasets/issues/5296
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5296/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5296/comments
https://api.github.com/repos/huggingface/datasets/issues/5296/events
https://github.com/huggingface/datasets/issues/5296
1,464,553,580
I_kwDODunzps5XS1Bs
5,296
Bug in xjoin with Windows pathnames
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}]
closed
false
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}]
No information
0
2022-11-25T13:29:33Z
2022-11-29T08:05:13Z
2022-11-29T08:05:13Z
MEMBER
No information
No information
No information
Currently, the `xjoin` function has a bug with local Windows pathnames: instead of returning the OS-dependent joined pathname, it always returns it in POSIX format. ```python from datasets.download.streaming_download_manager import xjoin path = xjoin("C:\\Users\\USERNAME", "filename.txt") ``` The joined path should be: ```python "C:\\Users\\USERNAME\\filename.txt" ``` However, it is: ```python "C:/Users/USERNAME/filename.txt" ```
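A sketch of the intended behavior (the actual fix is in PR #5297): local paths keep the OS separator, while URLs keep POSIX-style joins.

```python
import os
import posixpath

def is_local_path(path: str) -> bool:
    return "://" not in path  # crude stand-in for datasets' own helper

# Sketch only, not the real patch.
def xjoin_sketch(a: str, *p: str) -> str:
    if is_local_path(a):
        return os.path.join(a, *p)    # e.g. "C:\\Users\\USERNAME\\filename.txt" on Windows
    return posixpath.join(a, *p)      # e.g. "https://host/dir/filename.txt"
```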
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5296/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5296/timeline
No information
completed
124
https://api.github.com/repos/huggingface/datasets/issues/5295
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5295/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5295/comments
https://api.github.com/repos/huggingface/datasets/issues/5295/events
https://github.com/huggingface/datasets/issues/5295
1,464,006,743
I_kwDODunzps5XQvhX
5,295
Extraction fails when a .zip file is located on a read-only path (e.g., SageMaker FastFile mode)
{'login': 'verdimrc', 'id': 2340781, 'node_id': 'MDQ6VXNlcjIzNDA3ODE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/2340781?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/verdimrc', 'html_url': 'https://github.com/verdimrc', 'followers_url': 'https://api.github.com/users/verdimrc/followers', 'following_url': 'https://api.github.com/users/verdimrc/following{/other_user}', 'gists_url': 'https://api.github.com/users/verdimrc/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/verdimrc/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/verdimrc/subscriptions', 'organizations_url': 'https://api.github.com/users/verdimrc/orgs', 'repos_url': 'https://api.github.com/users/verdimrc/repos', 'events_url': 'https://api.github.com/users/verdimrc/events{/privacy}', 'received_events_url': 'https://api.github.com/users/verdimrc/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
0
2022-11-25T03:59:43Z
2022-11-25T05:03:03Z
No information
NONE
No information
No information
No information
### Describe the bug Hi, `load_dataset()` does not work with .zip files located in a read-only directory. It looks like this is because `datasets` creates a lock file in the [same directory](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/utils/extract.py) as the .zip file. I encountered this when attempting `load_dataset()` on a data dir mounted with SageMaker FastFile mode. ### Steps to reproduce the bug ```python # Showing relevant lines only. hyperparameters = { "dataset_name": "ydshieh/coco_dataset_script", "dataset_config_name": 2017, "data_dir": "/opt/ml/input/data/coco", "cache_dir": "/tmp/huggingface-cache", # Fix dataset complains out-of-space. ... } estimator = PyTorch( base_job_name="clip", source_dir="../src/sm-entrypoint", entry_point="run_clip.py", # Transformers/src/examples/pytorch/contrastive-image-text/run_clip.py framework_version="1.12", py_version="py38", hyperparameters=hyperparameters, instance_count=1, instance_type="ml.p3.16xlarge", volume_size=100, distribution={"smdistributed": {"dataparallel": {"enabled": True}}}, ) fast_file = lambda x: TrainingInput(x, input_mode='FastFile') estimator.fit( { "pre-trained": fast_file("s3://vm-sagemakerr-us-east-1/clip/pre-trained-checkpoint/"), "coco": fast_file("s3://vm-sagemakerr-us-east-1/clip/coco-zip-files/"), } ) ``` Error message: ```text ErrorMessage "OSError: [Errno 30] Read-only file system: '/opt/ml/input/data/coco/image_info_test2017.zip.lock' """ The above exception was the direct cause of the following exception Traceback (most recent call last) File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/opt/conda/lib/python3.8/site-packages/mpi4py/__main__.py", line 7, in <module> main() File "/opt/conda/lib/python3.8/site-packages/mpi4py/run.py", line 198, in main run_command_line(args) File "/opt/conda/lib/python3.8/site-packages/mpi4py/run.py", line 47, in run_command_line run_path(sys.argv[0], run_name='__main__') File "/opt/conda/lib/python3.8/runpy.py", line 265, in run_path return _run_module_code(code, init_globals, run_name, File "/opt/conda/lib/python3.8/runpy.py", line 97, in _run_module_code _run_code(code, mod_globals, init_globals, File "run_clip_smddp.py", line 594, in <module> File "run_clip_smddp.py", line 327, in main dataset = load_dataset( File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1741, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 822, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 1555, in _download_and_prepare super()._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 891, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/ydshieh--coco_dataset_script/e033205c0266a54c10be132f9264f2a39dcf893e798f6756d224b1ff5078998f/coco_dataset_script.py", line 123, in _split_generators archive_path = dl_manager.download_and_extract(_DL_URLS) File "/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py", line 447, in download_and_extract return self.extract(self.download(url_or_urls)) File 
"/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py", line 419, in extract extracted_paths = map_nested( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 472, in map_nested mapped = pool.map(_single_map_nested, split_kwds) File "/opt/conda/lib/python3.8/multiprocessing/pool.py", line 364, in map return self._map_async(func, iterable, mapstar, chunksize).get() File "/opt/conda/lib/python3.8/multiprocessing/pool.py", line 771, in get raise self._value OSError: [Errno 30] Read-only file system: '/opt/ml/input/data/coco/image_info_test2017.zip.lock'" ``` ### Expected behavior `load_dataset()` to succeed, just like when .zip file is passed in SageMaker File mode. ### Environment info * datasets-2.7.1 * transformers-4.24.0 * python-3.8 * torch-1.12 * SageMaker PyTorch DLC
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5295/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5295/timeline
No information
No information
125
https://api.github.com/repos/huggingface/datasets/issues/5294
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5294/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5294/comments
https://api.github.com/repos/huggingface/datasets/issues/5294/events
https://github.com/huggingface/datasets/pull/5294
1,463,679,582
PR_kwDODunzps5DqgLW
5,294
Support streaming datasets with pathlib.Path.with_suffix
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
1
2022-11-24T18:04:38Z
2022-11-29T07:09:08Z
2022-11-29T07:06:32Z
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5294', 'html_url': 'https://github.com/huggingface/datasets/pull/5294', 'diff_url': 'https://github.com/huggingface/datasets/pull/5294.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5294.patch', 'merged_at': '2022-11-29T07:06:32Z'}
This PR extends the support in streaming mode for datasets that use `pathlib.Path.with_suffix`. Fix #5293.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5294/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5294/timeline
No information
No information
126
https://api.github.com/repos/huggingface/datasets/issues/5293
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5293/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5293/comments
https://api.github.com/repos/huggingface/datasets/issues/5293/events
https://github.com/huggingface/datasets/issues/5293
1,463,669,201
I_kwDODunzps5XPdHR
5,293
Support streaming datasets with pathlib.Path.with_suffix
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}]
closed
false
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}]
No information
0
2022-11-24T17:52:08Z
2022-11-29T07:06:33Z
2022-11-29T07:06:33Z
MEMBER
No information
No information
No information
Extend support for streaming datasets that use `pathlib.Path.with_suffix`. This feature will be useful e.g. for datasets containing text files and annotated files with the same name but different extension.
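A minimal illustration of the pattern this should enable in streaming mode:

```python
from pathlib import Path

# Pair each text file with its same-stem annotation file.
text_file = Path("data/sample.txt")
annotation_file = text_file.with_suffix(".ann")  # -> data/sample.ann
```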
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5293/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5293/timeline
No information
completed
127
https://api.github.com/repos/huggingface/datasets/issues/5292
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5292/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5292/comments
https://api.github.com/repos/huggingface/datasets/issues/5292/events
https://github.com/huggingface/datasets/issues/5292
1,463,053,832
I_kwDODunzps5XNG4I
5,292
Missing documentation build for versions 2.7.1 and 2.6.2
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[{'id': 4296013012, 'node_id': 'LA_kwDODunzps8AAAABAA_01A', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/maintenance', 'name': 'maintenance', 'color': 'd4c5f9', 'default': False, 'description': 'Maintenance tasks'}]
closed
false
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}]
No information
1
2022-11-24T09:42:10Z
2022-11-24T10:10:02Z
2022-11-24T10:10:02Z
MEMBER
No information
No information
No information
After the patch releases [2.7.1](https://github.com/huggingface/datasets/releases/tag/2.7.1) and [2.6.2](https://github.com/huggingface/datasets/releases/tag/2.6.2), the online docs were not properly built (the build_documentation workflow was not triggered). There was a fix in: - #5291 However, both documentation versions were built from the main branch instead of their corresponding version branches. We are rebuilding them.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5292/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5292/timeline
No information
completed
128
https://api.github.com/repos/huggingface/datasets/issues/5291
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5291/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5291/comments
https://api.github.com/repos/huggingface/datasets/issues/5291/events
https://github.com/huggingface/datasets/pull/5291
1,462,983,472
PR_kwDODunzps5DoKNC
5,291
[build doc] for v2.7.1 & v2.6.2
{'login': 'mishig25', 'id': 11827707, 'node_id': 'MDQ6VXNlcjExODI3NzA3', 'avatar_url': 'https://avatars.githubusercontent.com/u/11827707?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mishig25', 'html_url': 'https://github.com/mishig25', 'followers_url': 'https://api.github.com/users/mishig25/followers', 'following_url': 'https://api.github.com/users/mishig25/following{/other_user}', 'gists_url': 'https://api.github.com/users/mishig25/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mishig25/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mishig25/subscriptions', 'organizations_url': 'https://api.github.com/users/mishig25/orgs', 'repos_url': 'https://api.github.com/users/mishig25/repos', 'events_url': 'https://api.github.com/users/mishig25/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mishig25/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
2
2022-11-24T08:54:47Z
2022-11-24T09:14:10Z
2022-11-24T09:11:15Z
CONTRIBUTOR
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5291', 'html_url': 'https://github.com/huggingface/datasets/pull/5291', 'diff_url': 'https://github.com/huggingface/datasets/pull/5291.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5291.patch', 'merged_at': None}
Do NOT merge. Using this PR to build docs for [v2.7.1](https://github.com/huggingface/datasets/pull/5291/commits/f4914af20700f611b9331a9e3ba34743bbeff934) & [v2.6.2](https://github.com/huggingface/datasets/pull/5291/commits/025f85300a0874eeb90a20393c62f25ac0accaa0)
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5291/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5291/timeline
No information
No information
129
https://api.github.com/repos/huggingface/datasets/issues/5290
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5290/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5290/comments
https://api.github.com/repos/huggingface/datasets/issues/5290/events
https://github.com/huggingface/datasets/pull/5290
1,462,716,766
PR_kwDODunzps5DnQsS
5,290
Fix error where reading breaks when a batch is missing an assigned column feature
{'login': 'eunseojo', 'id': 12104720, 'node_id': 'MDQ6VXNlcjEyMTA0NzIw', 'avatar_url': 'https://avatars.githubusercontent.com/u/12104720?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/eunseojo', 'html_url': 'https://github.com/eunseojo', 'followers_url': 'https://api.github.com/users/eunseojo/followers', 'following_url': 'https://api.github.com/users/eunseojo/following{/other_user}', 'gists_url': 'https://api.github.com/users/eunseojo/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/eunseojo/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/eunseojo/subscriptions', 'organizations_url': 'https://api.github.com/users/eunseojo/orgs', 'repos_url': 'https://api.github.com/users/eunseojo/repos', 'events_url': 'https://api.github.com/users/eunseojo/events{/privacy}', 'received_events_url': 'https://api.github.com/users/eunseojo/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
1
2022-11-24T03:53:46Z
2022-11-25T03:21:54Z
No information
NONE
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5290', 'html_url': 'https://github.com/huggingface/datasets/pull/5290', 'diff_url': 'https://github.com/huggingface/datasets/pull/5290.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5290.patch', 'merged_at': None}
No information
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5290/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5290/timeline
No information
No information
130
https://api.github.com/repos/huggingface/datasets/issues/5289
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5289/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5289/comments
https://api.github.com/repos/huggingface/datasets/issues/5289/events
https://github.com/huggingface/datasets/pull/5289
1,462,543,139
PR_kwDODunzps5Dmrk9
5,289
Added support for JXL images.
{'login': 'alexjc', 'id': 445208, 'node_id': 'MDQ6VXNlcjQ0NTIwOA==', 'avatar_url': 'https://avatars.githubusercontent.com/u/445208?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/alexjc', 'html_url': 'https://github.com/alexjc', 'followers_url': 'https://api.github.com/users/alexjc/followers', 'following_url': 'https://api.github.com/users/alexjc/following{/other_user}', 'gists_url': 'https://api.github.com/users/alexjc/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/alexjc/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/alexjc/subscriptions', 'organizations_url': 'https://api.github.com/users/alexjc/orgs', 'repos_url': 'https://api.github.com/users/alexjc/repos', 'events_url': 'https://api.github.com/users/alexjc/events{/privacy}', 'received_events_url': 'https://api.github.com/users/alexjc/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
11
2022-11-23T23:16:33Z
2022-11-29T18:49:46Z
No information
NONE
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5289', 'html_url': 'https://github.com/huggingface/datasets/pull/5289', 'diff_url': 'https://github.com/huggingface/datasets/pull/5289.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5289.patch', 'merged_at': None}
JPEG-XL is the most advanced of the next generation of image codecs, supporting both lossless and lossy files, with better compression and quality than PNG and JPG respectively. It has reduced the disk sizes and bandwidth required for many of the datasets I use. Pillow does not yet support JXL, but there's a plugin as a separate Python library that does (`pip install jxlpy`), and I've tested that this change works as expected when the plugin is imported. Dataset used for testing (you must `git pull` it, as loading it from Python won't work until `datasets-server` is also changed to support JXL files): https://huggingface.co/datasets/texturedesign/td01_natural-ground-textures The case where the plugin is not imported first raises an error: ``` PIL.UnidentifiedImageError: cannot identify image file 'td01/train/set01/01_145523.jxl' ``` In order to enable support for JXL even before Pillow supports it natively, should this exception be handled with a better error message? I'd expect/hope JXL support to follow in one of the Pillow quarterly releases in the next 6-9 months.
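For reference, a sketch of the tested setup as described above; the exact import that registers the codec with Pillow is an assumption here:

```python
import jxlpy  # noqa: F401  -- assumed to register the JXL codec with Pillow on import
from PIL import Image

# Without the plugin import, this raises PIL.UnidentifiedImageError.
img = Image.open("td01/train/set01/01_145523.jxl")
print(img.size, img.mode)
```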
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5289/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5289/timeline
No information
No information
131
https://api.github.com/repos/huggingface/datasets/issues/5288
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5288/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5288/comments
https://api.github.com/repos/huggingface/datasets/issues/5288/events
https://github.com/huggingface/datasets/issues/5288
1,462,134,067
I_kwDODunzps5XJmUz
5,288
Lossy json serialization - deserialization of dataset info
{'login': 'anuragprat1k', 'id': 57542204, 'node_id': 'MDQ6VXNlcjU3NTQyMjA0', 'avatar_url': 'https://avatars.githubusercontent.com/u/57542204?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/anuragprat1k', 'html_url': 'https://github.com/anuragprat1k', 'followers_url': 'https://api.github.com/users/anuragprat1k/followers', 'following_url': 'https://api.github.com/users/anuragprat1k/following{/other_user}', 'gists_url': 'https://api.github.com/users/anuragprat1k/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/anuragprat1k/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/anuragprat1k/subscriptions', 'organizations_url': 'https://api.github.com/users/anuragprat1k/orgs', 'repos_url': 'https://api.github.com/users/anuragprat1k/repos', 'events_url': 'https://api.github.com/users/anuragprat1k/events{/privacy}', 'received_events_url': 'https://api.github.com/users/anuragprat1k/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
1
2022-11-23T17:20:15Z
2022-11-25T12:53:51Z
No information
NONE
No information
No information
No information
### Describe the bug Saving a dataset to disk as JSON (using `to_json`) and then loading it again (using `load_dataset`) results in features whose labels are not type-cast correctly. In the code snippet below, `features.label` should have type `ClassLabel` but has type `Value` instead. ### Steps to reproduce the bug ``` from datasets import load_dataset def test_serdes_from_json(d): dataset = load_dataset(d, split="train") dataset.to_json('_test') dataset_loaded = load_dataset("json", data_files='_test', split='train') try: assert dataset_loaded.info.features == dataset.info.features, "features unequal!" except Exception as ex: print(f'{ex}') print(f'expected {dataset.info.features}, \nactual { dataset_loaded.info.features }') test_serdes_from_json('rotten_tomatoes') ``` Output: ``` features unequal! expected {'text': Value(dtype='string', id=None), 'label': ClassLabel(names=['neg', 'pos'], id=None)}, actual {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)} ``` ### Expected behavior The deserialized `features.label` should have type `ClassLabel`. ### Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.10.144-127.601.amzn2.x86_64-x86_64-with-glibc2.17 - Python version: 3.7.13 - PyArrow version: 7.0.0 - Pandas version: 1.2.3
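A possible workaround sketch until this is fixed: re-apply the original features after the JSON round trip, since plain JSON cannot carry the `ClassLabel` type.

```python
from datasets import load_dataset

original = load_dataset("rotten_tomatoes", split="train")
original.to_json("_test")
reloaded = load_dataset("json", data_files="_test", split="train")
# Cast the int64 label column back to the original ClassLabel feature.
reloaded = reloaded.cast(original.features)
assert reloaded.features == original.features
```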
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5288/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5288/timeline
No information
No information
132
https://api.github.com/repos/huggingface/datasets/issues/5287
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5287/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5287/comments
https://api.github.com/repos/huggingface/datasets/issues/5287/events
https://github.com/huggingface/datasets/pull/5287
1,461,971,889
PR_kwDODunzps5Dkttf
5,287
Fix methods using `IterableDataset.map` that lead to `features=None`
{'login': 'alvarobartt', 'id': 36760800, 'node_id': 'MDQ6VXNlcjM2NzYwODAw', 'avatar_url': 'https://avatars.githubusercontent.com/u/36760800?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/alvarobartt', 'html_url': 'https://github.com/alvarobartt', 'followers_url': 'https://api.github.com/users/alvarobartt/followers', 'following_url': 'https://api.github.com/users/alvarobartt/following{/other_user}', 'gists_url': 'https://api.github.com/users/alvarobartt/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/alvarobartt/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/alvarobartt/subscriptions', 'organizations_url': 'https://api.github.com/users/alvarobartt/orgs', 'repos_url': 'https://api.github.com/users/alvarobartt/repos', 'events_url': 'https://api.github.com/users/alvarobartt/events{/privacy}', 'received_events_url': 'https://api.github.com/users/alvarobartt/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
7
2022-11-23T15:33:25Z
2022-11-28T15:43:14Z
2022-11-28T12:53:22Z
CONTRIBUTOR
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5287', 'html_url': 'https://github.com/huggingface/datasets/pull/5287', 'diff_url': 'https://github.com/huggingface/datasets/pull/5287.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5287.patch', 'merged_at': '2022-11-28T12:53:22Z'}
Since `IterableDataset.map` currently sets `info.features` to `None` every time (as we don't know the output of the dataset in advance), `IterableDataset` methods such as `rename_column`, `rename_columns`, and `remove_columns` that internally use `map` lead to the features being `None`. This PR is related to #3888, #5245, and #5284 ## ✅ Current solution The code in this PR is basically making sure that if the features were there since the beginning and a `rename_column`/`rename_columns` happens, those are kept and the rename is applied to the `Features` too. Also, if the features were not there before applying `rename_column`, `rename_columns` or `remove_columns`, a batch is prefetched and the features are inferred (that could potentially be part of `IterableDataset.__init__` in case the `info.features` value is `None`). ## 💡 Ideas Some ideas were proposed in https://github.com/huggingface/datasets/issues/3888, but probably the most consistent solution, even though it may take some time, is to actually do the type inference during `IterableDataset.__init__` in case the provided `info.features` is `None`; otherwise, we can just use the provided features. Additionally, as mentioned at https://github.com/huggingface/datasets/issues/3888, we could also include a `features` parameter in the `map` function, but that's probably more tedious. Also thanks to @lhoestq for sharing some ideas in both https://github.com/huggingface/datasets/issues/3888 and https://github.com/huggingface/datasets/issues/5245 :hugs:
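A small sketch of the behavior this PR targets (the dataset name is just a placeholder):

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
renamed = ds.rename_column("text", "content")
# Before this fix, `renamed.features` could be None because rename_column
# goes through map(); with it, the renamed Features should be preserved.
print(renamed.features)
```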
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5287/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5287/timeline
No information
No information
133
https://api.github.com/repos/huggingface/datasets/issues/5286
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5286/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5286/comments
https://api.github.com/repos/huggingface/datasets/issues/5286/events
https://github.com/huggingface/datasets/issues/5286
1,461,908,087
I_kwDODunzps5XIvJ3
5,286
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json
{'login': 'roritol', 'id': 32490135, 'node_id': 'MDQ6VXNlcjMyNDkwMTM1', 'avatar_url': 'https://avatars.githubusercontent.com/u/32490135?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/roritol', 'html_url': 'https://github.com/roritol', 'followers_url': 'https://api.github.com/users/roritol/followers', 'following_url': 'https://api.github.com/users/roritol/following{/other_user}', 'gists_url': 'https://api.github.com/users/roritol/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/roritol/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/roritol/subscriptions', 'organizations_url': 'https://api.github.com/users/roritol/orgs', 'repos_url': 'https://api.github.com/users/roritol/repos', 'events_url': 'https://api.github.com/users/roritol/events{/privacy}', 'received_events_url': 'https://api.github.com/users/roritol/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
1
2022-11-23T14:54:15Z
2022-11-25T11:33:14Z
2022-11-25T11:33:14Z
NONE
No information
No information
No information
### Describe the bug

I follow the steps provided on the website [https://huggingface.co/datasets/wikipedia](https://huggingface.co/datasets/wikipedia):

```
$ pip install apache_beam mwparserfromhell

>>> from datasets import load_dataset
>>> load_dataset("wikipedia", "20220301.en")
```

However, this results in the following error:

```
raise MissingBeamOptions(
datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')`
```

If I then prompt the system with:

```
>>> load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')
```

the following error occurs:

```
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json
```

Here is the exact code:

```
Python 3.10.6 (main, Nov  2 2022, 18:53:38) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> load_dataset('wikipedia', '20220301.en')
Downloading and preparing dataset wikipedia/20220301.en to /home/[EDITED]/.cache/huggingface/datasets/wikipedia/20220301.en/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...
Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 15.3k/15.3k [00:00<00:00, 22.2MB/s]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 1741, in load_dataset
    builder_instance.download_and_prepare(
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 822, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1879, in _download_and_prepare
    raise MissingBeamOptions(
datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')`
>>> load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')
Downloading and preparing dataset wikipedia/20220301.en to /home/[EDITED]/.cache/huggingface/datasets/wikipedia/20220301.en/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...
Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 15.3k/15.3k [00:00<00:00, 18.8MB/s]
Downloading data files:   0%|          | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 1741, in load_dataset
    builder_instance.download_and_prepare(
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 822, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1909, in _download_and_prepare
    super()._download_and_prepare(
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 891, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/home/rorytol/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia.py", line 945, in _split_generators
    downloaded_files = dl_manager.download_and_extract({"info": info_url})
  File "/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py", line 447, in download_and_extract
    return self.extract(self.download(url_or_urls))
  File "/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py", line 311, in download
    downloaded_path_or_paths = map_nested(
  File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 444, in map_nested
    mapped = [
  File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 445, in <listcomp>
    _single_map_nested((function, obj, types, None, True, None))
  File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 346, in _single_map_nested
    return function(data_struct)
  File "/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py", line 338, in _download
    return cached_path(url_or_filename, download_config=download_config)
  File "/usr/local/lib/python3.10/dist-packages/datasets/utils/file_utils.py", line 183, in cached_path
    output_path = get_from_cache(
  File "/usr/local/lib/python3.10/dist-packages/datasets/utils/file_utils.py", line 530, in get_from_cache
    raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json
```

### Steps to reproduce the bug

```
$ pip install apache_beam mwparserfromhell

>>> from datasets import load_dataset
>>> load_dataset("wikipedia", "20220301.en")
>>> load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')
```

### Expected behavior

Download the dataset.

### Environment info

Running linux on a remote workstation operated through a macbook terminal

Python 3.10.6
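A hedged sketch of one possible way around the 404: Wikimedia dumps are typically removed from dumps.wikimedia.org after a few months, so pointing the loader at a currently listed dump date may avoid the missing-file error (the date below is a placeholder; check https://dumps.wikimedia.org/enwiki/ for what is actually available):

```python
from datasets import load_dataset

# Placeholder date: pick one that is currently listed on dumps.wikimedia.org.
ds = load_dataset("wikipedia", language="en", date="20221101", beam_runner="DirectRunner")
```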
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5286/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5286/timeline
No information
completed
134
https://api.github.com/repos/huggingface/datasets/issues/5285
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5285/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5285/comments
https://api.github.com/repos/huggingface/datasets/issues/5285/events
https://github.com/huggingface/datasets/pull/5285
1,461,521,215
PR_kwDODunzps5DjLgG
5,285
Save file name in embed_storage
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
2
2022-11-23T10:55:54Z
2022-11-24T14:11:41Z
2022-11-24T14:08:37Z
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5285', 'html_url': 'https://github.com/huggingface/datasets/pull/5285', 'diff_url': 'https://github.com/huggingface/datasets/pull/5285.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5285.patch', 'merged_at': '2022-11-24T14:08:37Z'}
Having the file name is useful in case we need to check the extension of the file (e.g. mp3), or in general in case it includes some metadata information (track id, image id etc.) Related to https://github.com/huggingface/datasets/issues/5276
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5285/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5285/timeline
No information
No information
135
https://api.github.com/repos/huggingface/datasets/issues/5284
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5284/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5284/comments
https://api.github.com/repos/huggingface/datasets/issues/5284/events
https://github.com/huggingface/datasets/issues/5284
1,461,519,733
I_kwDODunzps5XHQV1
5,284
Features of IterableDataset set to None by remove_columns
{'login': 'sanchit-gandhi', 'id': 93869735, 'node_id': 'U_kgDOBZhWpw', 'avatar_url': 'https://avatars.githubusercontent.com/u/93869735?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/sanchit-gandhi', 'html_url': 'https://github.com/sanchit-gandhi', 'followers_url': 'https://api.github.com/users/sanchit-gandhi/followers', 'following_url': 'https://api.github.com/users/sanchit-gandhi/following{/other_user}', 'gists_url': 'https://api.github.com/users/sanchit-gandhi/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/sanchit-gandhi/subscriptions', 'organizations_url': 'https://api.github.com/users/sanchit-gandhi/orgs', 'repos_url': 'https://api.github.com/users/sanchit-gandhi/repos', 'events_url': 'https://api.github.com/users/sanchit-gandhi/events{/privacy}', 'received_events_url': 'https://api.github.com/users/sanchit-gandhi/received_events', 'type': 'User', 'site_admin': False}
[{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}, {'id': 3287858981, 'node_id': 'MDU6TGFiZWwzMjg3ODU4OTgx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/streaming', 'name': 'streaming', 'color': 'fef2c0', 'default': False, 'description': ''}]
closed
false
{'login': 'alvarobartt', 'id': 36760800, 'node_id': 'MDQ6VXNlcjM2NzYwODAw', 'avatar_url': 'https://avatars.githubusercontent.com/u/36760800?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/alvarobartt', 'html_url': 'https://github.com/alvarobartt', 'followers_url': 'https://api.github.com/users/alvarobartt/followers', 'following_url': 'https://api.github.com/users/alvarobartt/following{/other_user}', 'gists_url': 'https://api.github.com/users/alvarobartt/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/alvarobartt/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/alvarobartt/subscriptions', 'organizations_url': 'https://api.github.com/users/alvarobartt/orgs', 'repos_url': 'https://api.github.com/users/alvarobartt/repos', 'events_url': 'https://api.github.com/users/alvarobartt/events{/privacy}', 'received_events_url': 'https://api.github.com/users/alvarobartt/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'alvarobartt', 'id': 36760800, 'node_id': 'MDQ6VXNlcjM2NzYwODAw', 'avatar_url': 'https://avatars.githubusercontent.com/u/36760800?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/alvarobartt', 'html_url': 'https://github.com/alvarobartt', 'followers_url': 'https://api.github.com/users/alvarobartt/followers', 'following_url': 'https://api.github.com/users/alvarobartt/following{/other_user}', 'gists_url': 'https://api.github.com/users/alvarobartt/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/alvarobartt/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/alvarobartt/subscriptions', 'organizations_url': 'https://api.github.com/users/alvarobartt/orgs', 'repos_url': 'https://api.github.com/users/alvarobartt/repos', 'events_url': 'https://api.github.com/users/alvarobartt/events{/privacy}', 'received_events_url': 'https://api.github.com/users/alvarobartt/received_events', 'type': 'User', 'site_admin': False}]
No information
8
2022-11-23T10:54:59Z
2022-11-28T15:18:08Z
2022-11-28T12:53:24Z
CONTRIBUTOR
No information
No information
No information
### Describe the bug

The `remove_columns` method of the IterableDataset sets the dataset features to `None`.

### Steps to reproduce the bug

```python
from datasets import Audio, load_dataset

# load LS in streaming mode
dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)

# check original features
print("Original features: ", dataset.features.keys())

# define features to remove: we KEEP audio and text
COLUMNS_TO_REMOVE = ['chapter_id', 'speaker_id', 'file', 'id']

dataset = dataset.remove_columns(COLUMNS_TO_REMOVE)

# check processed features, uh-oh!
print("Processed features: ", dataset.features)

# streaming the first audio sample still works
print("First sample:", next(iter(dataset)))
```

**Print Output:**

```
Original features:  dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'])
Processed features:  None
First sample: {'audio': {'path': '2277-149896-0000.flac', 'array': array([ 0.00186157,  0.0005188 ,  0.00024414, ..., -0.00097656, -0.00109863, -0.00146484]), 'sampling_rate': 16000}, 'text': "HE WAS IN A FEVERED STATE OF MIND OWING TO THE BLIGHT HIS WIFE'S ACTION THREATENED TO CAST UPON HIS ENTIRE FUTURE"}
```

### Expected behavior

The features should be those **not** removed by the `remove_columns` method, i.e. audio and text.

### Environment info

- `datasets` version: 2.7.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.15
- PyArrow version: 9.0.0
- Pandas version: 1.3.5

(Running on Google Colab for a blog post: https://colab.research.google.com/drive/1ySCQREPZEl4msLfxb79pYYOWjUZhkr9y#scrollTo=8pRDGiVmH2ml)

cc @polinaeterna @lhoestq
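Until a fix lands, a hedged workaround sketch (not an official API): keep a copy of the original `features` before the transform and filter it yourself.

```python
from datasets import load_dataset

dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
original_features = dataset.features.copy()  # still populated at this point

COLUMNS_TO_REMOVE = ['chapter_id', 'speaker_id', 'file', 'id']
dataset = dataset.remove_columns(COLUMNS_TO_REMOVE)

# dataset.features is now None, but the expected schema can be rebuilt manually:
expected_features = {k: v for k, v in original_features.items() if k not in COLUMNS_TO_REMOVE}
print("Expected features:", list(expected_features.keys()))  # ['audio', 'text']
```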
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5284/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5284/timeline
No information
completed
136
https://api.github.com/repos/huggingface/datasets/issues/5283
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5283/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5283/comments
https://api.github.com/repos/huggingface/datasets/issues/5283/events
https://github.com/huggingface/datasets/pull/5283
1,460,291,003
PR_kwDODunzps5De5M1
5,283
Release: 2.6.2
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
1
2022-11-22T17:36:24Z
2022-11-22T17:50:12Z
2022-11-22T17:47:02Z
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5283', 'html_url': 'https://github.com/huggingface/datasets/pull/5283', 'diff_url': 'https://github.com/huggingface/datasets/pull/5283.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5283.patch', 'merged_at': '2022-11-22T17:47:02Z'}
No information
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5283/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5283/timeline
No information
No information
137
https://api.github.com/repos/huggingface/datasets/issues/5282
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5282/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5282/comments
https://api.github.com/repos/huggingface/datasets/issues/5282/events
https://github.com/huggingface/datasets/pull/5282
1,460,238,928
PR_kwDODunzps5Det2_
5,282
Release: 2.7.1
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
0
2022-11-22T16:58:54Z
2022-11-22T17:21:28Z
2022-11-22T17:21:27Z
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5282', 'html_url': 'https://github.com/huggingface/datasets/pull/5282', 'diff_url': 'https://github.com/huggingface/datasets/pull/5282.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5282.patch', 'merged_at': '2022-11-22T17:21:27Z'}
No information
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5282/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5282/timeline
No information
No information
138
https://api.github.com/repos/huggingface/datasets/issues/5281
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5281/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5281/comments
https://api.github.com/repos/huggingface/datasets/issues/5281/events
https://github.com/huggingface/datasets/issues/5281
1,459,930,271
I_kwDODunzps5XBMSf
5,281
Support cloud storage in load_dataset
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}]
open
false
No information
[]
No information
1
2022-11-22T14:00:10Z
2022-11-25T15:54:18Z
No information
MEMBER
No information
No information
No information
Would be nice to be able to do

```python
data_files = ["s3://..."]
storage_options = {...}
load_dataset(..., data_files=data_files, storage_options=storage_options)
```

or even

```python
load_dataset("gs://...")
```

The idea would be to use `fsspec`, as in `download_and_prepare` and `save_to_disk`.

This has been requested several times already; some users want to use their data from private cloud storage to train models.

Related: https://github.com/huggingface/datasets/issues/3490, https://github.com/huggingface/datasets/issues/5244, and this [forum thread](https://discuss.huggingface.co/t/how-to-use-s3-path-with-load-dataset-with-streaming-true/25739/2).
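In the meantime, a hedged workaround sketch (bucket path and credentials below are placeholders, and reading from S3 with pandas requires `s3fs` installed): read the files through an `fsspec`-aware library yourself and build a `Dataset` from the result.

```python
import pandas as pd
from datasets import Dataset

# Placeholder credentials and path; s3fs must be installed for s3:// URLs.
storage_options = {"key": "<aws-access-key>", "secret": "<aws-secret-key>"}
df = pd.read_json("s3://my-bucket/data.jsonl", lines=True, storage_options=storage_options)
ds = Dataset.from_pandas(df)
```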
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5281/reactions', 'total_count': 5, '+1': 2, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 3, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5281/timeline
No information
No information
139
https://api.github.com/repos/huggingface/datasets/issues/5280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5280/comments
https://api.github.com/repos/huggingface/datasets/issues/5280/events
https://github.com/huggingface/datasets/issues/5280
1,459,823,179
I_kwDODunzps5XAyJL
5,280
Import error
{'login': 'feketedavid1012', 'id': 40760055, 'node_id': 'MDQ6VXNlcjQwNzYwMDU1', 'avatar_url': 'https://avatars.githubusercontent.com/u/40760055?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/feketedavid1012', 'html_url': 'https://github.com/feketedavid1012', 'followers_url': 'https://api.github.com/users/feketedavid1012/followers', 'following_url': 'https://api.github.com/users/feketedavid1012/following{/other_user}', 'gists_url': 'https://api.github.com/users/feketedavid1012/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/feketedavid1012/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/feketedavid1012/subscriptions', 'organizations_url': 'https://api.github.com/users/feketedavid1012/orgs', 'repos_url': 'https://api.github.com/users/feketedavid1012/repos', 'events_url': 'https://api.github.com/users/feketedavid1012/events{/privacy}', 'received_events_url': 'https://api.github.com/users/feketedavid1012/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
5
2022-11-22T12:56:43Z
2022-11-22T13:57:49Z
No information
NONE
No information
No information
No information
https://github.com/huggingface/datasets/blob/cd3d8e637cfab62d352a3f4e5e60e96597b5f0e9/src/datasets/__init__.py#L28 Hi, I get an error at the line above. I have Python version 3.8.13, and the message says I need Python >= 3.7, which my interpreter satisfies, so I think the `if` statement is not working properly (or the message is wrong).
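For reference, a tuple-based guard like the sketch below cannot misfire on 3.8.13; whether the reported failure actually comes from a string comparison in the installed copy is an assumption.

```python
import sys

# Comparing tuples of ints is safe; comparing version strings is not:
# "3.10" < "3.7" is True as strings but False as version tuples.
if sys.version_info < (3, 7):
    raise ImportError("datasets requires Python >= 3.7")
```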
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5280/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5280/timeline
No information
No information
140
https://api.github.com/repos/huggingface/datasets/issues/5279
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5279/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5279/comments
https://api.github.com/repos/huggingface/datasets/issues/5279/events
https://github.com/huggingface/datasets/pull/5279
1,459,635,002
PR_kwDODunzps5Dcoue
5,279
Warn about checksums
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
3
2022-11-22T10:58:48Z
2022-11-23T11:43:50Z
2022-11-23T09:47:02Z
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5279', 'html_url': 'https://github.com/huggingface/datasets/pull/5279', 'diff_url': 'https://github.com/huggingface/datasets/pull/5279.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5279.patch', 'merged_at': '2022-11-23T09:47:01Z'}
Computing the checksums takes a lot of time on big datasets, so we should at least add a warning to notify the user about this step. I also mentioned how to disable it, and added a tqdm bar (delay=5 seconds). cc @ola13
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5279/reactions', 'total_count': 1, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 1, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5279/timeline
No information
No information
141
https://api.github.com/repos/huggingface/datasets/issues/5278
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5278/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5278/comments
https://api.github.com/repos/huggingface/datasets/issues/5278/events
https://github.com/huggingface/datasets/issues/5278
1,459,574,490
I_kwDODunzps5W_1ba
5,278
load_dataset does not read jsonl metadata file properly
{'login': '065294847', 'id': 81414263, 'node_id': 'MDQ6VXNlcjgxNDE0MjYz', 'avatar_url': 'https://avatars.githubusercontent.com/u/81414263?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/065294847', 'html_url': 'https://github.com/065294847', 'followers_url': 'https://api.github.com/users/065294847/followers', 'following_url': 'https://api.github.com/users/065294847/following{/other_user}', 'gists_url': 'https://api.github.com/users/065294847/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/065294847/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/065294847/subscriptions', 'organizations_url': 'https://api.github.com/users/065294847/orgs', 'repos_url': 'https://api.github.com/users/065294847/repos', 'events_url': 'https://api.github.com/users/065294847/events{/privacy}', 'received_events_url': 'https://api.github.com/users/065294847/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
6
2022-11-22T10:24:46Z
2022-11-23T11:38:35Z
2022-11-23T11:38:35Z
NONE
No information
No information
No information
### Describe the bug

Hi, I'm following [this page](https://huggingface.co/docs/datasets/image_dataset) to create a dataset of images and captions via an image folder and a metadata.json file, but I can't seem to get the dataloader to recognize the "text" column. It just spits out "image" and "label" as features.

Below is code to reproduce my exact example/problem.

### Steps to reproduce the bug

```python
import gdown
from datasets import load_dataset

dataset_link = "19Unu89Ih_kP6zsE7f9Mkw8dy3NwHopRF"
id = dataset_link
output = 'Godardv01.zip'
gdown.download(id=id, output=output, quiet=False)

ds = load_dataset("imagefolder", data_dir="/kaggle/working/Volumes/TOSHIBA/Godard_imgs/Volumes/TOSHIBA/Godard_imgs/Full/train", split="train", drop_labels=False)
print(ds)
```

### Expected behavior

I would expect it to return "image" and "text" columns from the code above.

### Environment info

- `datasets` version: 2.1.0
- Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- PyArrow version: 5.0.0
- Pandas version: 1.3.5
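For reference, a minimal sketch of the layout the imagefolder loader documents: a `metadata.jsonl` file (note: `.jsonl`, not `.json`) next to the images, with a mandatory `file_name` column. File names and captions below are placeholders.

```python
import json

rows = [
    {"file_name": "0001.png", "text": "first caption"},
    {"file_name": "0002.png", "text": "second caption"},
]
# Written to <data_dir>/train/metadata.jsonl, alongside the image files themselves.
with open("train/metadata.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```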
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5278/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5278/timeline
No information
completed
142
https://api.github.com/repos/huggingface/datasets/issues/5277
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5277/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5277/comments
https://api.github.com/repos/huggingface/datasets/issues/5277/events
https://github.com/huggingface/datasets/pull/5277
1,459,388,551
PR_kwDODunzps5Dbybu
5,277
Remove YAML integer keys from class_label metadata
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
3
2022-11-22T08:34:07Z
2022-11-22T13:58:26Z
2022-11-22T13:55:49Z
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5277', 'html_url': 'https://github.com/huggingface/datasets/pull/5277', 'diff_url': 'https://github.com/huggingface/datasets/pull/5277.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5277.patch', 'merged_at': '2022-11-22T13:55:49Z'}
Partially fixes #5275.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5277/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5277/timeline
No information
No information
143
https://api.github.com/repos/huggingface/datasets/issues/5276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5276/comments
https://api.github.com/repos/huggingface/datasets/issues/5276/events
https://github.com/huggingface/datasets/issues/5276
1,459,363,442
I_kwDODunzps5W_B5y
5,276
Bug in downloading common_voice data and uploading a small chunk of it to one's own hub
{'login': 'capsabogdan', 'id': 48530104, 'node_id': 'MDQ6VXNlcjQ4NTMwMTA0', 'avatar_url': 'https://avatars.githubusercontent.com/u/48530104?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/capsabogdan', 'html_url': 'https://github.com/capsabogdan', 'followers_url': 'https://api.github.com/users/capsabogdan/followers', 'following_url': 'https://api.github.com/users/capsabogdan/following{/other_user}', 'gists_url': 'https://api.github.com/users/capsabogdan/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/capsabogdan/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/capsabogdan/subscriptions', 'organizations_url': 'https://api.github.com/users/capsabogdan/orgs', 'repos_url': 'https://api.github.com/users/capsabogdan/repos', 'events_url': 'https://api.github.com/users/capsabogdan/events{/privacy}', 'received_events_url': 'https://api.github.com/users/capsabogdan/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
17
2022-11-22T08:17:53Z
2022-11-30T16:59:49Z
No information
NONE
No information
No information
No information
### Describe the bug

I'm trying to load the common voice dataset. Currently there is no implementation to download just part of the data, and I need just one part of it, without downloading the entire dataset.

Help please?

![image](https://user-images.githubusercontent.com/48530104/203260511-26df766f-6013-4eaf-be26-8aa13794def2.png)

### Steps to reproduce the bug

So here is what I have done:
1. Download common_voice data.
2. Trim part of it and publish it to my own repo.
3. Download data from my own repo, but I am getting this error.

### Expected behavior

There shouldn't be an error in downloading part of the data and publishing it to one's own repo.

### Environment info

common_voice 11
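A hedged sketch of the trim-and-republish flow described above (repo ids, config, and subset size are placeholders; `push_to_hub` requires being logged in, and common_voice is a gated dataset):

```python
from datasets import load_dataset

ds = load_dataset("mozilla-foundation/common_voice_11_0", "de", split="train", use_auth_token=True)
small = ds.select(range(100))  # keep just a small chunk
small.push_to_hub("my-user/common_voice_11_0_small")  # placeholder repo id
```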
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5276/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5276/timeline
No information
No information
144
https://api.github.com/repos/huggingface/datasets/issues/5275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5275/comments
https://api.github.com/repos/huggingface/datasets/issues/5275/events
https://github.com/huggingface/datasets/issues/5275
1,459,358,919
I_kwDODunzps5W_AzH
5,275
YAML integer keys are not preserved Hub server-side
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}]
open
false
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}]
No information
0
2022-11-22T08:14:47Z
2022-11-23T08:44:16Z
No information
MEMBER
No information
No information
No information
After an internal discussion (https://github.com/huggingface/moon-landing/issues/4563):
- YAML integer keys are not preserved server-side: they are transformed to strings
- See for example this Hub PR: https://huggingface.co/datasets/acronym_identification/discussions/1/files
  - Original:
    ```yaml
    class_label:
      names:
        0: B-long
        1: B-short
    ```
  - Returned by the server:
    ```yaml
    class_label:
      names:
        '0': B-long
        '1': B-short
    ```
- They are planning to enforce only string keys
- Other projects already use integer-transformed-to-string keys: e.g. `transformers` models `id2label`: https://huggingface.co/roberta-large-mnli/blob/main/config.json
  ```json
  "id2label": {
    "0": "CONTRADICTION",
    "1": "NEUTRAL",
    "2": "ENTAILMENT"
  }
  ```

On the other hand, at `datasets` we are currently using YAML integer keys for `dataset_info` `class_label`.

Please note (thanks @lhoestq for pointing this out) that previous versions (2.6 and 2.7) of `datasets` need to be patched:

```python
In [18]: Features._from_yaml_list([{'dtype': {'class_label': {'names': {'0': 'neg', '1': 'pos'}}}, 'name': 'label'}])
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-18-974f07eea526> in <module>
----> 1 Features._from_yaml_list(ry)

~/Desktop/hf/nlp/src/datasets/features/features.py in _from_yaml_list(cls, yaml_data)
   1743             raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}")
   1744
-> 1745         return cls.from_dict(from_yaml_inner(yaml_data))
   1746
   1747     def encode_example(self, example):

~/Desktop/hf/nlp/src/datasets/features/features.py in from_yaml_inner(obj)
   1739             elif isinstance(obj, list):
   1740                 names = [_feature.pop("name") for _feature in obj]
-> 1741                 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)}
   1742             else:
   1743                 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}")

~/Desktop/hf/nlp/src/datasets/features/features.py in <dictcomp>(.0)
   1739             elif isinstance(obj, list):
   1740                 names = [_feature.pop("name") for _feature in obj]
-> 1741                 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)}
   1742             else:
   1743                 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}")

~/Desktop/hf/nlp/src/datasets/features/features.py in from_yaml_inner(obj)
   1734                     return {"_type": snakecase_to_camelcase(obj["dtype"])}
   1735                 else:
-> 1736                     return from_yaml_inner(obj["dtype"])
   1737             else:
   1738                 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]}

~/Desktop/hf/nlp/src/datasets/features/features.py in from_yaml_inner(obj)
   1736                     return from_yaml_inner(obj["dtype"])
   1737             else:
   1738                 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]}
   1739             elif isinstance(obj, list):
   1740                 names = [_feature.pop("name") for _feature in obj]

~/Desktop/hf/nlp/src/datasets/features/features.py in unsimplify(feature)
   1704         if isinstance(feature.get("class_label"), dict) and isinstance(feature["class_label"].get("names"), dict):
   1705             label_ids = sorted(feature["class_label"]["names"])
-> 1706             if label_ids and label_ids != list(range(label_ids[-1] + 1)):
   1707                 raise ValueError(
   1708                     f"ClassLabel expected a value for all label ids [0:{label_ids[-1] + 1}] but some ids are missing."

TypeError: can only concatenate str (not "int") to str
```

TODO:
- [x] Remove YAML integer keys from `dataset_info` metadata
- [x] Make a patch release for affected `datasets` versions: 2.6 and 2.7
- [ ] Communicate on the fix
- [ ] Wait for adoption
- [ ] Bulk edit the Hub to fix this in all canonical datasets
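A minimal sketch of the server-compatible direction (plain `pyyaml`, not the actual `datasets` serialization code): dump `names` with string keys, as the Hub does, and cast them back to integer label ids on read.

```python
import yaml

names = {0: "B-long", 1: "B-short"}

# Serialize with string keys, matching what the Hub server returns.
dumped = yaml.safe_dump({"class_label": {"names": {str(k): v for k, v in names.items()}}})

# Cast back to integer label ids when reading.
loaded = yaml.safe_load(dumped)
restored = {int(k): v for k, v in loaded["class_label"]["names"].items()}
assert restored == names
```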
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5275/reactions', 'total_count': 1, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 1, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5275/timeline
No information
No information
145
https://api.github.com/repos/huggingface/datasets/issues/5274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5274/comments
https://api.github.com/repos/huggingface/datasets/issues/5274/events
https://github.com/huggingface/datasets/issues/5274
1,458,646,455
I_kwDODunzps5W8S23
5,274
load_dataset possibly broken for gated datasets?
{'login': 'TristanThrush', 'id': 20826878, 'node_id': 'MDQ6VXNlcjIwODI2ODc4', 'avatar_url': 'https://avatars.githubusercontent.com/u/20826878?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/TristanThrush', 'html_url': 'https://github.com/TristanThrush', 'followers_url': 'https://api.github.com/users/TristanThrush/followers', 'following_url': 'https://api.github.com/users/TristanThrush/following{/other_user}', 'gists_url': 'https://api.github.com/users/TristanThrush/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/TristanThrush/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/TristanThrush/subscriptions', 'organizations_url': 'https://api.github.com/users/TristanThrush/orgs', 'repos_url': 'https://api.github.com/users/TristanThrush/repos', 'events_url': 'https://api.github.com/users/TristanThrush/events{/privacy}', 'received_events_url': 'https://api.github.com/users/TristanThrush/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
6
2022-11-21T21:59:53Z
2022-11-28T02:50:42Z
2022-11-28T02:50:42Z
MEMBER
No information
No information
No information
### Describe the bug

When trying to download the [winoground dataset](https://huggingface.co/datasets/facebook/winoground), I get this error unless I roll back the version of huggingface-hub:

```
[/usr/local/lib/python3.7/dist-packages/huggingface_hub/utils/_validators.py](https://localhost:8080/#) in validate_repo_id(repo_id)
    165     if repo_id.count("/") > 1:
    166         raise HFValidationError(
--> 167             "Repo id must be in the form 'repo_name' or 'namespace/repo_name':"
    168             f" '{repo_id}'. Use `repo_type` argument if needed."
    169         )

HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'datasets/facebook/winoground'. Use `repo_type` argument if needed
```

### Steps to reproduce the bug

Install requirements:

```
pip install transformers
pip install datasets
# It works if you uncomment the following line, rolling back huggingface hub:
# pip install huggingface-hub==0.10.1
```

Then:

```python
from datasets import load_dataset

auth_token = ""  # Replace with an auth token, which you can get from your huggingface account: Profile -> Settings -> Access Tokens -> New Token
winoground = load_dataset("facebook/winoground", use_auth_token=auth_token)["test"]
```

### Expected behavior

Downloading of the dataset.

### Environment info

Just a google colab; see here: https://colab.research.google.com/drive/15wwOSte2CjTazdnCWYUm2VPlFbk2NGc0?usp=sharing
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5274/reactions', 'total_count': 1, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 1}
https://api.github.com/repos/huggingface/datasets/issues/5274/timeline
No information
completed
146
https://api.github.com/repos/huggingface/datasets/issues/5273
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5273/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5273/comments
https://api.github.com/repos/huggingface/datasets/issues/5273/events
https://github.com/huggingface/datasets/issues/5273
1,458,018,050
I_kwDODunzps5W55cC
5,273
download_mode="force_redownload" does not refresh cached dataset
{'login': 'nomisto', 'id': 28439912, 'node_id': 'MDQ6VXNlcjI4NDM5OTEy', 'avatar_url': 'https://avatars.githubusercontent.com/u/28439912?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/nomisto', 'html_url': 'https://github.com/nomisto', 'followers_url': 'https://api.github.com/users/nomisto/followers', 'following_url': 'https://api.github.com/users/nomisto/following{/other_user}', 'gists_url': 'https://api.github.com/users/nomisto/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/nomisto/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/nomisto/subscriptions', 'organizations_url': 'https://api.github.com/users/nomisto/orgs', 'repos_url': 'https://api.github.com/users/nomisto/repos', 'events_url': 'https://api.github.com/users/nomisto/events{/privacy}', 'received_events_url': 'https://api.github.com/users/nomisto/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
0
2022-11-21T14:12:43Z
2022-11-21T14:13:03Z
No information
NONE
No information
No information
No information
### Describe the bug

`load_dataset` does not refresh the dataset when features are imported from an external file, even with `download_mode="force_redownload"`. The bug is not limited to nested fields; however, it is more likely to occur with nested fields.

### Steps to reproduce the bug

To reproduce the bug, 3 files are needed: `dataset.py` (contains the dataset loading script), `schema.py` (contains the features of the dataset), and `main.py` (to run `load_dataset`).

`dataset.py`

```python
import datasets
from schema import features


class NewDataset(datasets.GeneratorBasedBuilder):

    def _info(self):
        return datasets.DatasetInfo(
            features=features
        )

    def _split_generators(self, dl_manager):
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN
            )
        ]

    def _generate_examples(self):
        data = [
            {"id": 0, "nested": []},
            {"id": 1, "nested": []}
        ]
        for key, example in enumerate(data):
            yield key, example
```

`schema.py`

```python
import datasets

features = datasets.Features(
    {
        "id": datasets.Value("int32"),
        "nested": [
            {"text": datasets.Value("string")}
        ]
    }
)
```

`main.py`

```python
import datasets

a = datasets.load_dataset("dataset.py")
print(a["train"].info.features)
```

Now if `main.py` is run, it prints the following correct output: `{'id': Value(dtype='int32', id=None), 'nested': [{'text': Value(dtype='string', id=None)}]}`.

However, if the label of the feature "text" is changed to something else, e.g. to

`schema.py`

```python
import datasets

features = datasets.Features(
    {
        "id": datasets.Value("int32"),
        "nested": [
            {"textfoo": datasets.Value("string")}
        ]
    }
)
```

`main.py` still prints `{'id': Value(dtype='int32', id=None), 'nested': [{'text': Value(dtype='string', id=None)}]}`, even if run with `download_mode="force_redownload"`. The only fix is to delete the folder in the cache.

### Expected behavior

The cached dataset is deleted and refreshed when using `load_dataset` with `download_mode="force_redownload"`.

### Environment info

- `datasets` version: 2.7.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.9
- PyArrow version: 10.0.0
- Pandas version: 1.3.5
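A hedged workaround sketch for the "delete the folder in the cache" step (the cached folder name below is illustrative; the exact cache layout may differ between versions):

```python
import shutil
from pathlib import Path

import datasets

# Remove the cached builder directory before reloading, so the new schema is picked up.
cache_dir = Path(datasets.config.HF_DATASETS_CACHE) / "dataset"  # illustrative folder name
shutil.rmtree(cache_dir, ignore_errors=True)

a = datasets.load_dataset("dataset.py")
```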
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5273/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5273/timeline
No information
No information
147
https://api.github.com/repos/huggingface/datasets/issues/5272
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5272/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5272/comments
https://api.github.com/repos/huggingface/datasets/issues/5272/events
https://github.com/huggingface/datasets/issues/5272
1,456,940,021
I_kwDODunzps5W1yP1
5,272
Use pyarrow Tensor dtype
{'login': 'franz101', 'id': 18228395, 'node_id': 'MDQ6VXNlcjE4MjI4Mzk1', 'avatar_url': 'https://avatars.githubusercontent.com/u/18228395?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/franz101', 'html_url': 'https://github.com/franz101', 'followers_url': 'https://api.github.com/users/franz101/followers', 'following_url': 'https://api.github.com/users/franz101/following{/other_user}', 'gists_url': 'https://api.github.com/users/franz101/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/franz101/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/franz101/subscriptions', 'organizations_url': 'https://api.github.com/users/franz101/orgs', 'repos_url': 'https://api.github.com/users/franz101/repos', 'events_url': 'https://api.github.com/users/franz101/events{/privacy}', 'received_events_url': 'https://api.github.com/users/franz101/received_events', 'type': 'User', 'site_admin': False}
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}]
open
false
No information
[]
No information
6
2022-11-20T15:18:41Z
2022-11-21T17:57:55Z
No information
NONE
No information
No information
No information
### Feature request

I was going through the discussion of converting tensors to lists. Is there a way to leverage pyarrow's Tensors for nested arrays / embeddings? For example:

```python
import pyarrow as pa
import numpy as np

x = np.array([[2, 2, 4], [4, 5, 100]], np.int32)
pa.Tensor.from_numpy(x, dim_names=["dim1", "dim2"])
```

[Apache docs](https://arrow.apache.org/docs/python/generated/pyarrow.Tensor.html)

Maybe this belongs in the pyarrow features / repo.

### Motivation

Working with big data, we need to make sure to use the best data structures and IO out there.

### Your contribution

Can try a PR if code changes are necessary.
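For what it's worth, a small round-trip sketch of that API (the assertion is just a sanity check):

```python
import numpy as np
import pyarrow as pa

x = np.array([[2, 2, 4], [4, 5, 100]], np.int32)
t = pa.Tensor.from_numpy(x, dim_names=["dim1", "dim2"])

# A pyarrow Tensor converts back to numpy without copying the buffer.
assert np.array_equal(t.to_numpy(), x)
print(t.shape, t.dim_names)  # shape and dimension names of the tensor
```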
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5272/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5272/timeline
No information
No information
148
https://api.github.com/repos/huggingface/datasets/issues/5271
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5271/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5271/comments
https://api.github.com/repos/huggingface/datasets/issues/5271/events
https://github.com/huggingface/datasets/pull/5271
1,456,807,738
PR_kwDODunzps5DTDX1
5,271
Fix #5269
{'login': 'Freed-Wu', 'id': 32936898, 'node_id': 'MDQ6VXNlcjMyOTM2ODk4', 'avatar_url': 'https://avatars.githubusercontent.com/u/32936898?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Freed-Wu', 'html_url': 'https://github.com/Freed-Wu', 'followers_url': 'https://api.github.com/users/Freed-Wu/followers', 'following_url': 'https://api.github.com/users/Freed-Wu/following{/other_user}', 'gists_url': 'https://api.github.com/users/Freed-Wu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/Freed-Wu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/Freed-Wu/subscriptions', 'organizations_url': 'https://api.github.com/users/Freed-Wu/orgs', 'repos_url': 'https://api.github.com/users/Freed-Wu/repos', 'events_url': 'https://api.github.com/users/Freed-Wu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/Freed-Wu/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
1
2022-11-20T07:50:49Z
2022-11-21T15:07:19Z
2022-11-21T15:06:38Z
NONE
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5271', 'html_url': 'https://github.com/huggingface/datasets/pull/5271', 'diff_url': 'https://github.com/huggingface/datasets/pull/5271.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5271.patch', 'merged_at': None}
```
$ datasets-cli convert --datasets_directory <TAB>
datasets_directory
benchmarks/  docs/  metrics/  notebooks/  src/  templates/  tests/  utils/
```
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5271/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5271/timeline
No information
No information
149
https://api.github.com/repos/huggingface/datasets/issues/5270
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5270/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5270/comments
https://api.github.com/repos/huggingface/datasets/issues/5270/events
https://github.com/huggingface/datasets/issues/5270
1,456,508,990
I_kwDODunzps5W0JA-
5,270
When len(_URLS) > 16, download will hang
{'login': 'Freed-Wu', 'id': 32936898, 'node_id': 'MDQ6VXNlcjMyOTM2ODk4', 'avatar_url': 'https://avatars.githubusercontent.com/u/32936898?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Freed-Wu', 'html_url': 'https://github.com/Freed-Wu', 'followers_url': 'https://api.github.com/users/Freed-Wu/followers', 'following_url': 'https://api.github.com/users/Freed-Wu/following{/other_user}', 'gists_url': 'https://api.github.com/users/Freed-Wu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/Freed-Wu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/Freed-Wu/subscriptions', 'organizations_url': 'https://api.github.com/users/Freed-Wu/orgs', 'repos_url': 'https://api.github.com/users/Freed-Wu/repos', 'events_url': 'https://api.github.com/users/Freed-Wu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/Freed-Wu/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
7
2022-11-19T14:27:41Z
2022-11-21T15:27:16Z
No information
NONE
No information
No information
No information
### Describe the bug

When `len(_URLS) > 16`, the download hangs and has to be interrupted by hand. Truncated session log (progress bars shortened, repeated per-worker tracebacks reduced to two representative ones):

```python
In [9]: dataset = load_dataset('Freed-Wu/kodak', split='test')
Downloading: 100%|██████████| 2.53k/2.53k [00:00<00:00, 1.88MB/s]
[11/19/22 22:16:21] WARNING  Using custom data configuration default   builder.py:379
Downloading and preparing dataset kodak/default to /home/wzy/.cache/huggingface/datasets/Freed-Wu___kodak/default/0.0.1/bd1cc3434212e3e654f7e16ad618f8a1470b5982b086c91b1d6bc7187183c6e9...
Downloading: 100%|██████████| 531k/531k [00:02<00:00, 239kB/s]
#10: 100%|██████████| 1/1 [00:04<00:00, 4.06s/obj]
Downloading: 100%|██████████| 534k/534k [00:02<00:00, 193kB/s]
#14: 100%|██████████| 1/1 [00:04<00:00, 4.37s/obj]
[...]
#0: 100%|██████████| 2/2 [00:11<00:00, 5.69s/obj]
^CProcess ForkPoolWorker-47:
Process ForkPoolWorker-46:
Process ForkPoolWorker-36:
[...]
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker
    task = get()
  File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get
    with self._rlock:
  File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
    return self._semlock.__enter__()
KeyboardInterrupt
[...]
Traceback (most recent call last):
  File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/usr/lib/python3.10/multiprocessing/pool.py", line 48, in mapstar
    return list(map(*args))
  File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in _single_map_nested
    mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
  File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in <listcomp>
    mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
  File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested
    return function(data_struct)
  File "/usr/lib/python3.10/site-packages/datasets/utils/download_manager.py", line 217, in _download
    return cached_path(url_or_filename, download_config=download_config)
  File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 298, in cached_path
    output_path = get_from_cache(
  File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 561, in get_from_cache
    response = http_head(
  File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 476, in http_head
    response = _request_with_retry(
  File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 405, in _request_with_retry
    response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
  File "/usr/lib/python3.10/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 489, in send
    resp = conn.urlopen(
  File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request
    self._validate_conn(conn)
  File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
    conn.connect()
  File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect
    self.sock = conn = self._new_conn()
  File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
    conn = connection.create_connection(
  File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 72, in create_connection
    for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
  File "/usr/lib/python3.10/socket.py", line 955, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
KeyboardInterrupt
#1: 0%|          | 0/2 [03:00<?, ?obj/s]
#3: 0%|          | 0/2 [03:00<?, ?obj/s]
#11: 0%|         | 0/1 [00:49<?, ?obj/s]
#5: 0%|          | 0/1 [03:00<?, ?obj/s]
#9: 0%|          | 0/1 [00:51<?, ?obj/s]
```

### Steps to reproduce the bug

```python
"""Kodak.

Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import datasets

NUMBER = 17

_DESCRIPTION = """\
The pictures below link to lossless, true color (24 bits per pixel, aka "full
color") images. It is my understanding they have been released by the Eastman
Kodak Company for unrestricted usage. Many sites use them as a standard test
suite for compression testing, etc. Prior to this site, they were only
available in the Sun Raster format via ftp. This meant that the images could
not be previewed before downloading. Since their release, however, the
lossless PNG format has been incorporated into all the major browsers. Since
PNG supports 24-bit lossless color (which GIF and JPEG do not), it is now
possible to offer this browser-friendly access to the images.
"""
_HOMEPAGE = "https://r0k.us/graphics/kodak/"
_LICENSE = "GPLv3"
_URLS = [
    f"https://github.com/MohamedBakrAli/Kodak-Lossless-True-Color-Image-Suite/raw/master/PhotoCD_PCD0992/{i}.png"
    for i in range(1, 1 + NUMBER)
]


class Kodak(datasets.GeneratorBasedBuilder):
    """Kodak datasets."""

    VERSION = datasets.Version("0.0.1")

    def _info(self):
        features = datasets.Features(
            {
                "image": datasets.Image(),
            }
        )
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            homepage=_HOMEPAGE,
            license=_LICENSE,
        )

    def _split_generators(self, dl_manager):
        """Return SplitGenerators."""
        file_paths = dl_manager.download_and_extract(_URLS)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "file_paths": file_paths,
                },
            ),
        ]

    def _generate_examples(self, file_paths):
        """Yield examples."""
        for file_path in file_paths:
            yield file_path, {"image": file_path}
```

### Expected behavior

When `len(_URLS) <= 16`, it works:

```python
In [3]: dataset = load_dataset('Freed-Wu/kodak', split='test')
Downloading: 100%|██████████| 2.53k/2.53k [00:00<00:00, 3.02MB/s]
[11/19/22 22:04:28] WARNING  Using custom data configuration default   builder.py:379
Downloading and preparing dataset kodak/default to /home/wzy/.cache/huggingface/datasets/Freed-Wu___kodak/default/0.0.1/d26017602a592b5bfa7e008127cdf9dec5af220c9068005f1b4eda036031f475...
Downloading: 100%|██████████| 593k/593k [00:00<00:00, 2.88MB/s]
Downloading: 100%|██████████| 621k/621k [00:03<00:00, 166kB/s]
Downloading: 100%|██████████| 531k/531k [00:01<00:00, 366kB/s]
100%|██████████| 16/16 [00:13<00:00, 1.18it/s]
100%|██████████| 16/16 [00:00<00:00, 3832.38it/s]
Dataset kodak downloaded and prepared to /home/wzy/.cache/huggingface/datasets/Freed-Wu___kodak/default/0.0.1/d26017602a592b5bfa7e008127cdf9dec5af220c9068005f1b4eda036031f475. Subsequent calls will reuse this data.
```

### Environment info

- `datasets` version: 2.7.0
- Platform: Linux-6.0.8-arch1-1-x86_64-with-glibc2.36
- Python version: 3.10.8
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5270/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5270/timeline
No information
No information
150
https://api.github.com/repos/huggingface/datasets/issues/5269
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5269/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5269/comments
https://api.github.com/repos/huggingface/datasets/issues/5269/events
https://github.com/huggingface/datasets/issues/5269
1,456,485,799
I_kwDODunzps5W0DWn
5,269
Shell completions
{'login': 'Freed-Wu', 'id': 32936898, 'node_id': 'MDQ6VXNlcjMyOTM2ODk4', 'avatar_url': 'https://avatars.githubusercontent.com/u/32936898?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Freed-Wu', 'html_url': 'https://github.com/Freed-Wu', 'followers_url': 'https://api.github.com/users/Freed-Wu/followers', 'following_url': 'https://api.github.com/users/Freed-Wu/following{/other_user}', 'gists_url': 'https://api.github.com/users/Freed-Wu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/Freed-Wu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/Freed-Wu/subscriptions', 'organizations_url': 'https://api.github.com/users/Freed-Wu/orgs', 'repos_url': 'https://api.github.com/users/Freed-Wu/repos', 'events_url': 'https://api.github.com/users/Freed-Wu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/Freed-Wu/received_events', 'type': 'User', 'site_admin': False}
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}]
closed
false
No information
[]
No information
2
2022-11-19T13:48:59Z
2022-11-21T15:06:15Z
2022-11-21T15:06:14Z
NONE
No information
No information
No information
### Feature request Like <https://github.com/huggingface/huggingface_hub/issues/1197>, datasets-cli may need it, too. ### Motivation See above. ### Your contribution Maybe.
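One possible route, sketched under the assumption that `datasets-cli` keeps its argparse front end: a third-party generator such as `shtab` can emit bash/zsh/tcsh completion scripts from the parser. This is an illustration, not code from the repo, and the subcommand list below is a placeholder:

```python
# Illustrative sketch, not code from the datasets repo.
import argparse

import shtab  # pip install shtab

parser = argparse.ArgumentParser(prog="datasets-cli")
# Adds -s/--print-completion {bash,zsh,tcsh}, which prints a completion script.
shtab.add_argument_to(parser, ["-s", "--print-completion"])
parser.add_argument("command", choices=["convert", "env", "test", "run_beam", "dummy_data"])

args = parser.parse_args()
```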
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5269/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5269/timeline
No information
completed
151
https://api.github.com/repos/huggingface/datasets/issues/5268
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5268/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5268/comments
https://api.github.com/repos/huggingface/datasets/issues/5268/events
https://github.com/huggingface/datasets/pull/5268
1,455,633,978
PR_kwDODunzps5DPIsp
5,268
Sharded save_to_disk + multiprocessing
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
1
2022-11-18T18:50:01Z
2022-11-30T13:06:13Z
No information
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5268', 'html_url': 'https://github.com/huggingface/datasets/pull/5268', 'diff_url': 'https://github.com/huggingface/datasets/pull/5268.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5268.patch', 'merged_at': None}
Added `num_shards=` and `num_proc=` to `save_to_disk()`. I also:
- deprecated the `fs` parameter in favor of `storage_options` (for consistency with the rest of the lib) in `save_to_disk` and `load_from_disk`
- always embed the image/audio data in arrow when doing `save_to_disk`
- added a tqdm bar in `save_to_disk`
- use the MockFileSystem in tests for `save_to_disk` and `load_from_disk`
- removed the unused integration tests with S3, since we can now test with `mockfs` instead of `s3fs`

TODO:
- [x] implement `save_to_disk` for dataset dict
- [x] `save_to_disk` for dataset dict tests
- [x] deprecate `fs` in dataset dict `load_from_disk` as well
- [x] update docs

Close #5263
Close https://github.com/huggingface/datasets/issues/4196
Close https://github.com/huggingface/datasets/issues/4351
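A minimal usage sketch of the API added here, with argument names taken from the description above; the dataset choice and output path are placeholders:

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")  # placeholder dataset

# Write the dataset as 8 arrow shards, preparing them with 4 worker processes.
ds.save_to_disk("rotten_tomatoes_train", num_shards=8, num_proc=4)
```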
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5268/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5268/timeline
No information
No information
152
https://api.github.com/repos/huggingface/datasets/issues/5267
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5267/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5267/comments
https://api.github.com/repos/huggingface/datasets/issues/5267/events
https://github.com/huggingface/datasets/pull/5267
1,455,466,464
PR_kwDODunzps5DOlFR
5,267
Fix `max_shard_size` docs
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
1
2022-11-18T16:55:22Z
2022-11-18T17:28:58Z
2022-11-18T17:25:27Z
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5267', 'html_url': 'https://github.com/huggingface/datasets/pull/5267', 'diff_url': 'https://github.com/huggingface/datasets/pull/5267.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5267.patch', 'merged_at': '2022-11-18T17:25:26Z'}
No information
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5267/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5267/timeline
No information
No information
153
https://api.github.com/repos/huggingface/datasets/issues/5266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5266/comments
https://api.github.com/repos/huggingface/datasets/issues/5266/events
https://github.com/huggingface/datasets/pull/5266
1,455,281,310
PR_kwDODunzps5DN9BT
5,266
Specify arguments as keywords in librosa.reshape to avoid future errors
{'login': 'polinaeterna', 'id': 16348744, 'node_id': 'MDQ6VXNlcjE2MzQ4NzQ0', 'avatar_url': 'https://avatars.githubusercontent.com/u/16348744?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/polinaeterna', 'html_url': 'https://github.com/polinaeterna', 'followers_url': 'https://api.github.com/users/polinaeterna/followers', 'following_url': 'https://api.github.com/users/polinaeterna/following{/other_user}', 'gists_url': 'https://api.github.com/users/polinaeterna/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/polinaeterna/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/polinaeterna/subscriptions', 'organizations_url': 'https://api.github.com/users/polinaeterna/orgs', 'repos_url': 'https://api.github.com/users/polinaeterna/repos', 'events_url': 'https://api.github.com/users/polinaeterna/events{/privacy}', 'received_events_url': 'https://api.github.com/users/polinaeterna/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
1
2022-11-18T14:58:47Z
2022-11-21T15:45:02Z
2022-11-21T15:41:57Z
CONTRIBUTOR
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5266', 'html_url': 'https://github.com/huggingface/datasets/pull/5266', 'diff_url': 'https://github.com/huggingface/datasets/pull/5266.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5266.patch', 'merged_at': '2022-11-21T15:41:57Z'}
Fixes a warning and future deprecation from `librosa.resample`:
```
FutureWarning: Pass orig_sr=16000, target_sr=48000 as keyword args. From version 0.10 passing these as positional arguments will result in an error
  array = librosa.resample(array, sampling_rate, self.sampling_rate, res_type="kaiser_best")
```
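For reference, a small self-contained sketch contrasting the deprecated positional call with the keyword form this PR switches to (synthetic audio, not the `datasets` code itself):

```python
import numpy as np
import librosa

array = np.zeros(16_000, dtype=np.float32)  # one second of silence at 16 kHz

# Deprecated since librosa 0.9, an error from 0.10 on:
#   librosa.resample(array, 16000, 48000, res_type="kaiser_best")

# Keyword form, as used after this PR:
resampled = librosa.resample(array, orig_sr=16_000, target_sr=48_000, res_type="kaiser_best")
print(resampled.shape)  # (48000,)
```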
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5266/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5266/timeline
No information
No information
154
https://api.github.com/repos/huggingface/datasets/issues/5265
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5265/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5265/comments
https://api.github.com/repos/huggingface/datasets/issues/5265/events
https://github.com/huggingface/datasets/issues/5265
1,455,274,864
I_kwDODunzps5Wvbtw
5,265
Get an IterableDataset from a map-style Dataset
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}, {'id': 3287858981, 'node_id': 'MDU6TGFiZWwzMjg3ODU4OTgx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/streaming', 'name': 'streaming', 'color': 'fef2c0', 'default': False, 'description': ''}]
open
false
No information
[]
No information
1
2022-11-18T14:54:40Z
2022-11-21T15:25:32Z
No information
MEMBER
No information
No information
No information
This is useful to leverage iterable-dataset-specific features like:
- fast approximate shuffling
- lazy map, filter, etc.

Iterating over the resulting iterable dataset should be at least as fast as iterating over the map-style dataset. Here are some ideas regarding the API (a usage sketch follows):
```python
# 1.
# - consistency with load_dataset(..., streaming=True)
# - gives intuition that map/filter/etc. are done on-the-fly
ids = ds.stream()

# 2.
# - more explicit on the output type
# - but maybe sounds like a conversion tool rather than a step in a processing pipeline
ids = ds.as_iterable_dataset()
```
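A sketch of how the conversion could then be used; the method name follows option 2 above and is hypothetical until the API is settled, while `shuffle(buffer_size=...)` and lazy `map` already exist on `IterableDataset`:

```python
# Hypothetical method name, taken from option 2 in this issue.
ids = ds.as_iterable_dataset()

ids = ids.shuffle(seed=42, buffer_size=10_000)  # fast approximate shuffling
ids = ids.map(lambda ex: {"text": ex["text"].lower()})  # lazy, runs while iterating

for example in ids:  # examples are produced on-the-fly
    break
```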
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5265/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5265/timeline
No information
No information
155
https://api.github.com/repos/huggingface/datasets/issues/5264
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5264/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5264/comments
https://api.github.com/repos/huggingface/datasets/issues/5264/events
https://github.com/huggingface/datasets/issues/5264
1,455,252,906
I_kwDODunzps5WvWWq
5,264
`datasets` can't read a Parquet file in Python 3.9.13
{'login': 'loubnabnl', 'id': 44069155, 'node_id': 'MDQ6VXNlcjQ0MDY5MTU1', 'avatar_url': 'https://avatars.githubusercontent.com/u/44069155?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/loubnabnl', 'html_url': 'https://github.com/loubnabnl', 'followers_url': 'https://api.github.com/users/loubnabnl/followers', 'following_url': 'https://api.github.com/users/loubnabnl/following{/other_user}', 'gists_url': 'https://api.github.com/users/loubnabnl/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/loubnabnl/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/loubnabnl/subscriptions', 'organizations_url': 'https://api.github.com/users/loubnabnl/orgs', 'repos_url': 'https://api.github.com/users/loubnabnl/repos', 'events_url': 'https://api.github.com/users/loubnabnl/events{/privacy}', 'received_events_url': 'https://api.github.com/users/loubnabnl/received_events', 'type': 'User', 'site_admin': False}
[{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}]
closed
false
No information
[]
No information
15
2022-11-18T14:44:01Z
2022-11-22T11:18:08Z
2022-11-22T11:18:08Z
NONE
No information
No information
No information
### Describe the bug

I have an error when trying to load this [dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj) (it's private but I can add you to the bigcode org). `datasets` can't read one of the parquet files in the Java subset:

```python
from datasets import load_dataset

ds = load_dataset("bigcode/the-stack-dedup-pjj", data_dir="data/java", split="train", revision="v1.1.a1", use_auth_token=True)
```
```
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```

It seems to be an issue with new Python versions, because it works in these two environments:

```
- `datasets` version: 2.6.1
- Platform: Linux-5.4.0-131-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
```
```
- `datasets` version: 2.6.1
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.12
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
```

But not in this one:

```
- `datasets` version: 2.6.1
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
```

### Steps to reproduce the bug

Load the dataset in Python 3.9.13.

### Expected behavior

The dataset loads without the pyarrow error.

### Environment info

```
- `datasets` version: 2.6.1
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
```
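For triage, a minimal check of whether pyarrow alone can open the file, independent of `datasets`; the shard filename below is a placeholder, not taken from the issue:

```python
import pyarrow.parquet as pq

# Raises ArrowInvalid ("Parquet magic bytes not found...") on a corrupted file.
meta = pq.read_metadata("java-00000-of-00285.parquet")  # hypothetical shard name
print(meta.num_rows, meta.num_row_groups)
```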
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5264/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5264/timeline
No information
completed
156
https://api.github.com/repos/huggingface/datasets/issues/5263
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5263/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5263/comments
https://api.github.com/repos/huggingface/datasets/issues/5263/events
https://github.com/huggingface/datasets/issues/5263
1,455,252,626
I_kwDODunzps5WvWSS
5,263
Save a dataset in a determined number of shards
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}]
open
false
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}]
No information
0
2022-11-18T14:43:54Z
2022-11-18T14:55:26Z
No information
MEMBER
No information
No information
No information
This is useful for distributing the shards across training nodes. This can be implemented in `save_to_disk`, and it can also leverage multiprocessing to speed up the process.
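A hypothetical usage sketch of the requested feature; the `num_shards` and `num_proc` parameter names are assumptions based on the description above, not a released interface:

```python
from datasets import load_dataset

ds = load_dataset("squad", split="train")

# Proposed: write exactly 8 shards, using 8 worker processes, so each
# training node can be assigned a fixed, predictable subset of shards.
ds.save_to_disk("squad_train", num_shards=8, num_proc=8)
```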
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5263/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5263/timeline
No information
No information
157
https://api.github.com/repos/huggingface/datasets/issues/5262
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5262/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5262/comments
https://api.github.com/repos/huggingface/datasets/issues/5262/events
https://github.com/huggingface/datasets/issues/5262
1,455,171,100
I_kwDODunzps5WvCYc
5,262
AttributeError: 'Value' object has no attribute 'names'
{'login': 'emnaboughariou', 'id': 102913847, 'node_id': 'U_kgDOBiJXNw', 'avatar_url': 'https://avatars.githubusercontent.com/u/102913847?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/emnaboughariou', 'html_url': 'https://github.com/emnaboughariou', 'followers_url': 'https://api.github.com/users/emnaboughariou/followers', 'following_url': 'https://api.github.com/users/emnaboughariou/following{/other_user}', 'gists_url': 'https://api.github.com/users/emnaboughariou/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/emnaboughariou/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/emnaboughariou/subscriptions', 'organizations_url': 'https://api.github.com/users/emnaboughariou/orgs', 'repos_url': 'https://api.github.com/users/emnaboughariou/repos', 'events_url': 'https://api.github.com/users/emnaboughariou/events{/privacy}', 'received_events_url': 'https://api.github.com/users/emnaboughariou/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
2
2022-11-18T13:58:42Z
2022-11-22T10:09:24Z
2022-11-22T10:09:23Z
NONE
No information
No information
No information
Hello, I'm trying to build a model for custom token classification. I followed the token classification course on Hugging Face while adapting the code to my work, and this message occurs: 'Value' object has no attribute 'names' Here's my code: `raw_datasets` generates DatasetDict({ train: Dataset({ features: ['isDisf', 'pos', 'tokens', 'id'], num_rows: 14 }) }) `raw_datasets["train"][3]["isDisf"]` generates ['B_RM', 'I_RM', 'I_RM', 'B_RP', 'I_RP', 'O', 'O'] `dis_feature = raw_datasets["train"].features["isDisf"] dis_feature` generates Sequence(feature=Value(dtype='string', id=None), length=-1, id=None) and `label_names = dis_feature.feature.names label_names` generates AttributeError Traceback (most recent call last) [<ipython-input-28-972fd54a869a>](https://localhost:8080/#) in <module> ----> 1 label_names = dis_feature.feature.names 2 label_names AttributeError: 'Value' object has no attribute 'names' Thank you for your help
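Only `ClassLabel` features carry a `.names` vocabulary; here `isDisf` is a `Sequence` of plain string `Value`s, which is why the attribute is missing. A minimal sketch of a workaround, assuming the `raw_datasets` DatasetDict from the report (the cast step typically works for string-labeled columns; if it doesn't, `label_names` alone is enough for the course code):

```python
from datasets import ClassLabel, Sequence

# Build the tag vocabulary from the data, since Sequence(Value("string"))
# has no fixed label set of its own.
label_names = sorted({tag for tags in raw_datasets["train"]["isDisf"] for tag in tags})

# Optionally cast the column so `.feature.names` exists afterwards.
raw_datasets = raw_datasets.cast_column("isDisf", Sequence(ClassLabel(names=label_names)))
print(raw_datasets["train"].features["isDisf"].feature.names)
```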
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5262/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5262/timeline
No information
completed
158
https://api.github.com/repos/huggingface/datasets/issues/5261
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5261/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5261/comments
https://api.github.com/repos/huggingface/datasets/issues/5261/events
https://github.com/huggingface/datasets/issues/5261
1,454,647,861
I_kwDODunzps5WtCo1
5,261
Add PubTables-1M
{'login': 'NielsRogge', 'id': 48327001, 'node_id': 'MDQ6VXNlcjQ4MzI3MDAx', 'avatar_url': 'https://avatars.githubusercontent.com/u/48327001?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/NielsRogge', 'html_url': 'https://github.com/NielsRogge', 'followers_url': 'https://api.github.com/users/NielsRogge/followers', 'following_url': 'https://api.github.com/users/NielsRogge/following{/other_user}', 'gists_url': 'https://api.github.com/users/NielsRogge/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/NielsRogge/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/NielsRogge/subscriptions', 'organizations_url': 'https://api.github.com/users/NielsRogge/orgs', 'repos_url': 'https://api.github.com/users/NielsRogge/repos', 'events_url': 'https://api.github.com/users/NielsRogge/events{/privacy}', 'received_events_url': 'https://api.github.com/users/NielsRogge/received_events', 'type': 'User', 'site_admin': False}
[{'id': 2067376369, 'node_id': 'MDU6TGFiZWwyMDY3Mzc2MzY5', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset%20request', 'name': 'dataset request', 'color': 'e99695', 'default': False, 'description': 'Requesting to add a new dataset'}]
open
false
No information
[]
No information
1
2022-11-18T07:56:36Z
2022-11-18T08:02:18Z
No information
CONTRIBUTOR
No information
No information
No information
### Name PubTables-1M ### Paper https://openaccess.thecvf.com/content/CVPR2022/html/Smock_PubTables-1M_Towards_Comprehensive_Table_Extraction_From_Unstructured_Documents_CVPR_2022_paper.html ### Data https://github.com/microsoft/table-transformer ### Motivation Table Transformer is now available in 🤗 Transformers, and it was trained on PubTables-1M. It's a large dataset for table extraction and structure recognition in unstructured documents.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5261/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5261/timeline
No information
No information
159
https://api.github.com/repos/huggingface/datasets/issues/5260
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5260/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5260/comments
https://api.github.com/repos/huggingface/datasets/issues/5260/events
https://github.com/huggingface/datasets/issues/5260
1,453,921,697
I_kwDODunzps5WqRWh
5,260
consumer-finance-complaints dataset not loading
{'login': 'adiprasad', 'id': 8098496, 'node_id': 'MDQ6VXNlcjgwOTg0OTY=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8098496?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/adiprasad', 'html_url': 'https://github.com/adiprasad', 'followers_url': 'https://api.github.com/users/adiprasad/followers', 'following_url': 'https://api.github.com/users/adiprasad/following{/other_user}', 'gists_url': 'https://api.github.com/users/adiprasad/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/adiprasad/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/adiprasad/subscriptions', 'organizations_url': 'https://api.github.com/users/adiprasad/orgs', 'repos_url': 'https://api.github.com/users/adiprasad/repos', 'events_url': 'https://api.github.com/users/adiprasad/events{/privacy}', 'received_events_url': 'https://api.github.com/users/adiprasad/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}]
No information
3
2022-11-17T20:10:26Z
2022-11-18T10:16:53Z
No information
NONE
No information
No information
No information
### Describe the bug Error during dataset loading ### Steps to reproduce the bug ``` >>> import datasets >>> cf_raw = datasets.load_dataset("consumer-finance-complaints") Downloading builder script: 100%|██████████| 8.42k/8.42k [00:00<00:00, 3.33MB/s] Downloading metadata: 100%|██████████| 5.60k/5.60k [00:00<00:00, 2.90MB/s] Downloading readme: 100%|██████████| 16.6k/16.6k [00:00<00:00, 510kB/s] Downloading and preparing dataset consumer-finance-complaints/default to /root/.cache/huggingface/datasets/consumer-finance-complaints/default/0.0.0/30e483d37fb4b25bb98cad1bfd2dc48f6ed6d1f3371eb4568c625a61d1a79b69...
Downloading data: 100%|██████████| 511M/511M [00:04<00:00, 103MB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/load.py", line 1741, in load_dataset builder_instance.download_and_prepare( File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/builder.py", line 822, in download_and_prepare self._download_and_prepare( File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/builder.py", line 1555, in _download_and_prepare super()._download_and_prepare( File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/builder.py", line 931, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=1605177353, num_examples=2455765, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=2043641693, num_examples=3079747, shard_lengths=[721000, 656000, 788000, 846000, 68747], dataset_name='consumer-finance-complaints')}] ``` ### Expected behavior The dataset should load. ### Environment info >>> datasets.__version__ '2.7.0' Python 3.8.10 "Ubuntu 20.04.4 LTS"
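The `NonMatchingSplitsSizesError` means the freshly downloaded data (3,079,747 examples) no longer matches the split sizes recorded in the dataset's metadata (2,455,765 examples), i.e. the upstream source has grown. As a stopgap, verification can be skipped; a sketch using the `ignore_verifications` flag visible in the `load_dataset` signature of this `datasets` version:

```python
import datasets

# Workaround: skip the split-size verification so the stale metadata
# does not abort loading. The proper fix is updating the metadata.
cf_raw = datasets.load_dataset("consumer-finance-complaints", ignore_verifications=True)
```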
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5260/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5260/timeline
No information
No information
160
https://api.github.com/repos/huggingface/datasets/issues/5259
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5259/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5259/comments
https://api.github.com/repos/huggingface/datasets/issues/5259/events
https://github.com/huggingface/datasets/issues/5259
1,453,555,923
I_kwDODunzps5Wo4DT
5,259
datasets 2.7 introduces sharding error
{'login': 'DCNemesis', 'id': 3616964, 'node_id': 'MDQ6VXNlcjM2MTY5NjQ=', 'avatar_url': 'https://avatars.githubusercontent.com/u/3616964?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/DCNemesis', 'html_url': 'https://github.com/DCNemesis', 'followers_url': 'https://api.github.com/users/DCNemesis/followers', 'following_url': 'https://api.github.com/users/DCNemesis/following{/other_user}', 'gists_url': 'https://api.github.com/users/DCNemesis/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/DCNemesis/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/DCNemesis/subscriptions', 'organizations_url': 'https://api.github.com/users/DCNemesis/orgs', 'repos_url': 'https://api.github.com/users/DCNemesis/repos', 'events_url': 'https://api.github.com/users/DCNemesis/events{/privacy}', 'received_events_url': 'https://api.github.com/users/DCNemesis/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
3
2022-11-17T15:36:52Z
2022-11-18T12:52:05Z
2022-11-18T12:52:05Z
NONE
No information
No information
No information
### Describe the bug The dataset fails to load with the runtime error `RuntimeError: Sharding is ambiguous for this dataset: we found several data sources lists of different lengths, and we don't know over which list we should parallelize: - key audio_files has length 46 - key data has length 0 To fix this, check the 'gen_kwargs' and make sure to use lists only for data sources, and use tuples otherwise. In the end there should only be one single list, or several lists with the same length.` ### Steps to reproduce the bug With datasets[audio] 2.7 loaded, and logged into Hugging Face, `data = datasets.load_dataset('sil-ai/bloom-speech', 'bis', use_auth_token=True)` creates the error. Full stack trace: ```--------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) [<ipython-input-7-8cb9ca0f79f0>](https://localhost:8080/#) in <module> ----> 1 data = datasets.load_dataset('sil-ai/bloom-speech', 'bis', use_auth_token=True) 5 frames [/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs) 1745 try_from_hf_gcs=try_from_hf_gcs, 1746 use_auth_token=use_auth_token, -> 1747 num_proc=num_proc, 1748 ) 1749 [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 824 verify_infos=verify_infos, 825 **prepare_split_kwargs, --> 826 **download_and_prepare_kwargs, 827 ) 828 # Sync info [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs) 1554 def _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs): 1555 super()._download_and_prepare( -> 1556 dl_manager, verify_infos, check_duplicate_keys=verify_infos, **prepare_splits_kwargs 1557 ) 1558 [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 911 try: 912 # Prepare split will record examples associated to the split --> 913 self._prepare_split(split_generator, **prepare_split_kwargs) 914 except OSError as e: 915 raise OSError( [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size) 1362 fpath = path_join(self._output_dir, fname) 1363 -> 1364 num_input_shards = _number_of_shards_in_gen_kwargs(split_generator.gen_kwargs) 1365 if num_input_shards <= 1 and num_proc is not None: 1366 logger.warning( [/usr/local/lib/python3.7/dist-packages/datasets/utils/sharding.py](https://localhost:8080/#) in _number_of_shards_in_gen_kwargs(gen_kwargs) 16 + "\n".join(f"\t- key {key} has length {length}" for key, length in lists_lengths.items()) 17 + "\nTo fix this, check the 'gen_kwargs' and make sure to use lists only for data sources, " ---> 18 + "and use tuples otherwise. In the end there should only be one single list, or several lists with the same length."
19 ) 20 ) RuntimeError: Sharding is ambiguous for this dataset: we found several data sources lists of different lengths, and we don't know over which list we should parallelize: - key audio_files has length 46 - key data has length 0 To fix this, check the 'gen_kwargs' and make sure to use lists only for data sources, and use tuples otherwise. In the end there should only be one single list, or several lists with the same length.``` ### Expected behavior The dataset loads with `datasets` version 2.6.1 and should also load with `datasets` 2.7. ### Environment info - `datasets` version: 2.7.0 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.15 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
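The failure comes from `_number_of_shards_in_gen_kwargs`: when `gen_kwargs` contains several lists of different lengths, `datasets` 2.7 cannot tell which list is the shardable data source. A schematic illustration of the fix the error message suggests for the dataset script (all names and values here are illustrative, not taken from `sil-ai/bloom-speech`):

```python
audio_files = [f"audio_{i:02d}.mp3" for i in range(46)]  # real data sources stay a list
data = ()  # auxiliary values go in a tuple, not a list

# Only one list remains, so _number_of_shards_in_gen_kwargs can
# unambiguously parallelize over `audio_files`.
gen_kwargs = {"audio_files": audio_files, "data": data}
```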
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5259/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5259/timeline
No information
completed
161
https://api.github.com/repos/huggingface/datasets/issues/5258
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5258/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5258/comments
https://api.github.com/repos/huggingface/datasets/issues/5258/events
https://github.com/huggingface/datasets/issues/5258
1,453,516,636
I_kwDODunzps5Woudc
5,258
Restore order of split names in dataset_info for canonical datasets
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[{'id': 4564477500, 'node_id': 'LA_kwDODunzps8AAAABEBBmPA', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution', 'name': 'dataset contribution', 'color': '0e8a16', 'default': False, 'description': 'Contribution to a dataset script'}]
closed
false
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}]
No information
3
2022-11-17T15:13:15Z
2022-11-19T06:51:38Z
2022-11-19T06:51:37Z
MEMBER
No information
No information
No information
After a bulk edit of canonical datasets to create the YAML `dataset_info` metadata, the split names were accidentally sorted alphabetically. See for example: - https://huggingface.co/datasets/bc2gm_corpus/commit/2384629484401ecf4bb77cd808816719c424e57c Note that this order is the one appearing in the preview of the datasets. I'm making a bulk edit to align the order of the splits appearing in the metadata info with the order appearing in the loading script.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5258/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5258/timeline
No information
completed
162
https://api.github.com/repos/huggingface/datasets/issues/5257
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5257/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5257/comments
https://api.github.com/repos/huggingface/datasets/issues/5257/events
https://github.com/huggingface/datasets/pull/5257
1,452,656,891
PR_kwDODunzps5DFENm
5,257
remove an unused statement
{'login': 'WrRan', 'id': 7569098, 'node_id': 'MDQ6VXNlcjc1NjkwOTg=', 'avatar_url': 'https://avatars.githubusercontent.com/u/7569098?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/WrRan', 'html_url': 'https://github.com/WrRan', 'followers_url': 'https://api.github.com/users/WrRan/followers', 'following_url': 'https://api.github.com/users/WrRan/following{/other_user}', 'gists_url': 'https://api.github.com/users/WrRan/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/WrRan/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/WrRan/subscriptions', 'organizations_url': 'https://api.github.com/users/WrRan/orgs', 'repos_url': 'https://api.github.com/users/WrRan/repos', 'events_url': 'https://api.github.com/users/WrRan/events{/privacy}', 'received_events_url': 'https://api.github.com/users/WrRan/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
0
2022-11-17T04:00:50Z
2022-11-18T11:04:08Z
2022-11-18T11:04:08Z
CONTRIBUTOR
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5257', 'html_url': 'https://github.com/huggingface/datasets/pull/5257', 'diff_url': 'https://github.com/huggingface/datasets/pull/5257.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5257.patch', 'merged_at': '2022-11-18T11:04:08Z'}
Remove the unused statement `input_pairs = list(zip())`.
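For reference, `zip()` called with no iterables yields nothing, so the removed statement always bound an empty list that was never read:

```python
input_pairs = list(zip())
print(input_pairs)  # [] -- zip() with no arguments is an empty iterator
```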
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5257/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5257/timeline
No information
No information
163
https://api.github.com/repos/huggingface/datasets/issues/5256
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5256/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5256/comments
https://api.github.com/repos/huggingface/datasets/issues/5256/events
https://github.com/huggingface/datasets/pull/5256
1,452,652,586
PR_kwDODunzps5DFDY0
5,256
fix wrong print
{'login': 'WrRan', 'id': 7569098, 'node_id': 'MDQ6VXNlcjc1NjkwOTg=', 'avatar_url': 'https://avatars.githubusercontent.com/u/7569098?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/WrRan', 'html_url': 'https://github.com/WrRan', 'followers_url': 'https://api.github.com/users/WrRan/followers', 'following_url': 'https://api.github.com/users/WrRan/following{/other_user}', 'gists_url': 'https://api.github.com/users/WrRan/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/WrRan/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/WrRan/subscriptions', 'organizations_url': 'https://api.github.com/users/WrRan/orgs', 'repos_url': 'https://api.github.com/users/WrRan/repos', 'events_url': 'https://api.github.com/users/WrRan/events{/privacy}', 'received_events_url': 'https://api.github.com/users/WrRan/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
0
2022-11-17T03:54:26Z
2022-11-18T11:05:32Z
2022-11-18T11:05:32Z
CONTRIBUTOR
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5256', 'html_url': 'https://github.com/huggingface/datasets/pull/5256', 'diff_url': 'https://github.com/huggingface/datasets/pull/5256.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5256.patch', 'merged_at': '2022-11-18T11:05:32Z'}
Print `encoded_dataset.column_names`, not `dataset.column_names`: `map` returns a new dataset, so the added columns only exist on the returned object.
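A small sketch of why the distinction matters; the `length` column is just an illustrative example:

```python
from datasets import Dataset

dataset = Dataset.from_dict({"text": ["a", "bb"]})
encoded_dataset = dataset.map(lambda ex: {"length": len(ex["text"])})

print(dataset.column_names)          # ['text'] -- the original is unchanged
print(encoded_dataset.column_names)  # ['text', 'length']
```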
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5256/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5256/timeline
No information
No information
164
https://api.github.com/repos/huggingface/datasets/issues/5255
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5255/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5255/comments
https://api.github.com/repos/huggingface/datasets/issues/5255/events
https://github.com/huggingface/datasets/issues/5255
1,452,631,517
I_kwDODunzps5WlWXd
5,255
Add a Depth Estimation dataset - DIODE / NYUDepth / KITTI
{'login': 'sayakpaul', 'id': 22957388, 'node_id': 'MDQ6VXNlcjIyOTU3Mzg4', 'avatar_url': 'https://avatars.githubusercontent.com/u/22957388?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/sayakpaul', 'html_url': 'https://github.com/sayakpaul', 'followers_url': 'https://api.github.com/users/sayakpaul/followers', 'following_url': 'https://api.github.com/users/sayakpaul/following{/other_user}', 'gists_url': 'https://api.github.com/users/sayakpaul/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/sayakpaul/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/sayakpaul/subscriptions', 'organizations_url': 'https://api.github.com/users/sayakpaul/orgs', 'repos_url': 'https://api.github.com/users/sayakpaul/repos', 'events_url': 'https://api.github.com/users/sayakpaul/events{/privacy}', 'received_events_url': 'https://api.github.com/users/sayakpaul/received_events', 'type': 'User', 'site_admin': False}
[{'id': 2067376369, 'node_id': 'MDU6TGFiZWwyMDY3Mzc2MzY5', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/dataset%20request', 'name': 'dataset request', 'color': 'e99695', 'default': False, 'description': 'Requesting to add a new dataset'}]
open
false
{'login': 'sayakpaul', 'id': 22957388, 'node_id': 'MDQ6VXNlcjIyOTU3Mzg4', 'avatar_url': 'https://avatars.githubusercontent.com/u/22957388?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/sayakpaul', 'html_url': 'https://github.com/sayakpaul', 'followers_url': 'https://api.github.com/users/sayakpaul/followers', 'following_url': 'https://api.github.com/users/sayakpaul/following{/other_user}', 'gists_url': 'https://api.github.com/users/sayakpaul/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/sayakpaul/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/sayakpaul/subscriptions', 'organizations_url': 'https://api.github.com/users/sayakpaul/orgs', 'repos_url': 'https://api.github.com/users/sayakpaul/repos', 'events_url': 'https://api.github.com/users/sayakpaul/events{/privacy}', 'received_events_url': 'https://api.github.com/users/sayakpaul/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'sayakpaul', 'id': 22957388, 'node_id': 'MDQ6VXNlcjIyOTU3Mzg4', 'avatar_url': 'https://avatars.githubusercontent.com/u/22957388?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/sayakpaul', 'html_url': 'https://github.com/sayakpaul', 'followers_url': 'https://api.github.com/users/sayakpaul/followers', 'following_url': 'https://api.github.com/users/sayakpaul/following{/other_user}', 'gists_url': 'https://api.github.com/users/sayakpaul/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/sayakpaul/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/sayakpaul/subscriptions', 'organizations_url': 'https://api.github.com/users/sayakpaul/orgs', 'repos_url': 'https://api.github.com/users/sayakpaul/repos', 'events_url': 'https://api.github.com/users/sayakpaul/events{/privacy}', 'received_events_url': 'https://api.github.com/users/sayakpaul/received_events', 'type': 'User', 'site_admin': False}]
No information
14
2022-11-17T03:22:22Z
2022-11-30T14:07:32Z
No information
CONTRIBUTOR
No information
No information
No information
### Name NYUDepth ### Paper http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf ### Data https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html ### Motivation Depth estimation is an important problem in computer vision. We have a couple of depth estimation models on the Hub as well: * [GLPN](https://huggingface.co/docs/transformers/model_doc/glpn) * [DPT](https://huggingface.co/docs/transformers/model_doc/dpt) It would be nice to have a dataset for depth estimation. These datasets usually have three things: an input image, a depth map image, and a depth mask (a validity mask indicating whether the reading for a pixel is valid or not). Since we already have [semantic segmentation datasets on the Hub](https://huggingface.co/datasets?task_categories=task_categories:image-segmentation&sort=downloads), I don't think we need any extended utilities to support this addition. Having this dataset would also allow us to author data preprocessing guides for depth estimation, similar to the ones we have for other tasks ([example](https://huggingface.co/docs/datasets/image_classification)). Ccing @osanseviero @nateraw @NielsRogge Happy to work on adding it.
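A minimal sketch of what the schema could look like with existing feature types, in line with the three-part structure described above (the column names are assumptions for illustration):

```python
from datasets import Features, Image

# Hypothetical schema: RGB input, depth map, and per-pixel validity mask.
features = Features(
    {
        "image": Image(),
        "depth_map": Image(),
        "mask": Image(),
    }
)
print(features)
```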
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5255/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5255/timeline
No information
No information
165
https://api.github.com/repos/huggingface/datasets/issues/5254
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5254/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5254/comments
https://api.github.com/repos/huggingface/datasets/issues/5254/events
https://github.com/huggingface/datasets/pull/5254
1,452,600,088
PR_kwDODunzps5DE47u
5,254
typo
{'login': 'WrRan', 'id': 7569098, 'node_id': 'MDQ6VXNlcjc1NjkwOTg=', 'avatar_url': 'https://avatars.githubusercontent.com/u/7569098?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/WrRan', 'html_url': 'https://github.com/WrRan', 'followers_url': 'https://api.github.com/users/WrRan/followers', 'following_url': 'https://api.github.com/users/WrRan/following{/other_user}', 'gists_url': 'https://api.github.com/users/WrRan/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/WrRan/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/WrRan/subscriptions', 'organizations_url': 'https://api.github.com/users/WrRan/orgs', 'repos_url': 'https://api.github.com/users/WrRan/repos', 'events_url': 'https://api.github.com/users/WrRan/events{/privacy}', 'received_events_url': 'https://api.github.com/users/WrRan/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
0
2022-11-17T02:39:57Z
2022-11-18T10:53:45Z
2022-11-18T10:53:45Z
CONTRIBUTOR
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5254', 'html_url': 'https://github.com/huggingface/datasets/pull/5254', 'diff_url': 'https://github.com/huggingface/datasets/pull/5254.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5254.patch', 'merged_at': '2022-11-18T10:53:45Z'}
No information
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5254/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5254/timeline
No information
No information
166
https://api.github.com/repos/huggingface/datasets/issues/5253
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5253/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5253/comments
https://api.github.com/repos/huggingface/datasets/issues/5253/events
https://github.com/huggingface/datasets/pull/5253
1,452,588,206
PR_kwDODunzps5DE2io
5,253
typo
{'login': 'WrRan', 'id': 7569098, 'node_id': 'MDQ6VXNlcjc1NjkwOTg=', 'avatar_url': 'https://avatars.githubusercontent.com/u/7569098?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/WrRan', 'html_url': 'https://github.com/WrRan', 'followers_url': 'https://api.github.com/users/WrRan/followers', 'following_url': 'https://api.github.com/users/WrRan/following{/other_user}', 'gists_url': 'https://api.github.com/users/WrRan/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/WrRan/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/WrRan/subscriptions', 'organizations_url': 'https://api.github.com/users/WrRan/orgs', 'repos_url': 'https://api.github.com/users/WrRan/repos', 'events_url': 'https://api.github.com/users/WrRan/events{/privacy}', 'received_events_url': 'https://api.github.com/users/WrRan/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
0
2022-11-17T02:22:58Z
2022-11-18T10:53:11Z
2022-11-18T10:53:10Z
CONTRIBUTOR
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5253', 'html_url': 'https://github.com/huggingface/datasets/pull/5253', 'diff_url': 'https://github.com/huggingface/datasets/pull/5253.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5253.patch', 'merged_at': '2022-11-18T10:53:10Z'}
No information
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5253/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5253/timeline
No information
No information
167
https://api.github.com/repos/huggingface/datasets/issues/5252
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5252/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5252/comments
https://api.github.com/repos/huggingface/datasets/issues/5252/events
https://github.com/huggingface/datasets/pull/5252
1,451,765,838
PR_kwDODunzps5DCI1U
5,252
Support for decoding Image/Audio types in map when format type is not default one
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/followers', 'following_url': 'https://api.github.com/users/mariosasko/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariosasko/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariosasko/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariosasko/subscriptions', 'organizations_url': 'https://api.github.com/users/mariosasko/orgs', 'repos_url': 'https://api.github.com/users/mariosasko/repos', 'events_url': 'https://api.github.com/users/mariosasko/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariosasko/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
2
2022-11-16T15:02:13Z
2022-11-30T18:46:37Z
No information
CONTRIBUTOR
No information
True
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5252', 'html_url': 'https://github.com/huggingface/datasets/pull/5252', 'diff_url': 'https://github.com/huggingface/datasets/pull/5252.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5252.patch', 'merged_at': None}
Add support for decoding (lazily) the `Image`/`Audio` types in `map` for the formats (NumPy, TF, JAX, PyTorch) other than the default one (Python). Additional improvements: * make `Dataset`'s "iter" API cleaner by removing `_iter` and replacing `_iter_batches` with `iter(batch_size)` (also implemented for `IterableDataset`) * iterate over arrow tables in `map` to avoid `_getitem` calls, which are much slower than `__iter__`/`iter(batch_size)`, when the `format_type` is not Python * fix `_iter_batches` (now named `iter`) when `drop_last_batch=True` and `pyarrow<=8.0.0` is installed TODO: * [ ] update the `iter` benchmark in the docs (the `BeamBuilder` cannot load the preprocessed datasets from our bucket, so wait for this to be fixed (cc @lhoestq))
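A usage sketch of the `iter(batch_size)` API named in the PR description; this reflects the interface as described above and may not exist in released versions at the time of the PR:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})

# Batched iteration, avoiding the slower per-row _getitem path.
for batch in ds.iter(batch_size=4):
    print(batch["x"])  # [0, 1, 2, 3], then [4, 5, 6, 7], then [8, 9]
```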
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5252/reactions', 'total_count': 1, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 1, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5252/timeline
No information
No information
168
https://api.github.com/repos/huggingface/datasets/issues/5251
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5251/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5251/comments
https://api.github.com/repos/huggingface/datasets/issues/5251/events
https://github.com/huggingface/datasets/issues/5251
1,451,761,321
I_kwDODunzps5WiB6p
5,251
Docs are not generated after latest release
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[{'id': 4296013012, 'node_id': 'LA_kwDODunzps8AAAABAA_01A', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/maintenance', 'name': 'maintenance', 'color': 'd4c5f9', 'default': False, 'description': 'Maintenance tasks'}]
closed
false
No information
[]
No information
8
2022-11-16T14:59:31Z
2022-11-22T16:27:50Z
2022-11-22T16:27:50Z
MEMBER
No information
No information
No information
After the latest `datasets` release, version 2.7.0, the docs were not generated. As we have changed the release procedure (so that now we do not push directly to the main branch), maybe we should also change the corresponding GitHub action: https://github.com/huggingface/datasets/blob/edf1902f954c5568daadebcd8754bdad44b02a85/.github/workflows/build_documentation.yml#L3-L8 Related to: - #5250 CC: @mishig25
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5251/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5251/timeline
No information
completed
169
https://api.github.com/repos/huggingface/datasets/issues/5250
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5250/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5250/comments
https://api.github.com/repos/huggingface/datasets/issues/5250/events
https://github.com/huggingface/datasets/pull/5250
1,451,720,030
PR_kwDODunzps5DB-1y
5,250
Change release procedure to use only pull requests
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
7
2022-11-16T14:35:32Z
2022-11-22T16:30:58Z
2022-11-22T16:27:48Z
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5250', 'html_url': 'https://github.com/huggingface/datasets/pull/5250', 'diff_url': 'https://github.com/huggingface/datasets/pull/5250.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5250.patch', 'merged_at': '2022-11-22T16:27:48Z'}
This PR changes the release procedure so that: - it only makes changes to the main branch via pull requests - it is no longer necessary to commit/push directly to the main branch Close #5251.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5250/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5250/timeline
No information
No information
170
https://api.github.com/repos/huggingface/datasets/issues/5249
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5249/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5249/comments
https://api.github.com/repos/huggingface/datasets/issues/5249/events
https://github.com/huggingface/datasets/issues/5249
1,451,692,247
I_kwDODunzps5WhxDX
5,249
Protect the main branch from inadvertent direct pushes
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[{'id': 4296013012, 'node_id': 'LA_kwDODunzps8AAAABAA_01A', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/maintenance', 'name': 'maintenance', 'color': 'd4c5f9', 'default': False, 'description': 'Maintenance tasks'}]
open
false
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}]
No information
0
2022-11-16T14:19:03Z
2022-11-16T14:36:14Z
No information
MEMBER
No information
No information
No information
We have decided to implement a protection mechanism in this repository, so that nobody (not even administrators) can inadvertently push directly to the main branch. See context here: - d7c942228b8dcf4de64b00a3053dce59b335f618 To do: - [x] Protect main branch - Settings > Branches > Branch protection rules > main > Edit - [x] Check: Do not allow bypassing the above settings - The above settings will apply to administrators and custom roles with the "bypass branch protections" permission. - [x] Additionally, uncheck: Require approvals [under "Require a pull request before merging", which was already checked] - Before, we could exceptionally merge a non-approved PR, using Administrator bypass - Now that Administrator bypass is no longer possible, we would always need an approval to be able to merge; and pull request authors cannot approve their own pull requests. This could be an inconvenience in some exceptional circumstances when an urgent fix is needed - Nevertheless, although it is no longer enforced, it is strongly recommended to merge PRs only if they have at least one approval - [ ] #5250 - So that direct pushes to the main branch are no longer necessary
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5249/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5249/timeline
No information
No information
171
https://api.github.com/repos/huggingface/datasets/issues/5248
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5248/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5248/comments
https://api.github.com/repos/huggingface/datasets/issues/5248/events
https://github.com/huggingface/datasets/pull/5248
1,451,338,676
PR_kwDODunzps5DAqwt
5,248
Complete doc migration
{'login': 'mishig25', 'id': 11827707, 'node_id': 'MDQ6VXNlcjExODI3NzA3', 'avatar_url': 'https://avatars.githubusercontent.com/u/11827707?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mishig25', 'html_url': 'https://github.com/mishig25', 'followers_url': 'https://api.github.com/users/mishig25/followers', 'following_url': 'https://api.github.com/users/mishig25/following{/other_user}', 'gists_url': 'https://api.github.com/users/mishig25/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mishig25/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mishig25/subscriptions', 'organizations_url': 'https://api.github.com/users/mishig25/orgs', 'repos_url': 'https://api.github.com/users/mishig25/repos', 'events_url': 'https://api.github.com/users/mishig25/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mishig25/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
2
2022-11-16T10:41:04Z
2022-11-16T15:06:50Z
2022-11-16T10:41:10Z
CONTRIBUTOR
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5248', 'html_url': 'https://github.com/huggingface/datasets/pull/5248', 'diff_url': 'https://github.com/huggingface/datasets/pull/5248.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5248.patch', 'merged_at': '2022-11-16T10:41:10Z'}
Reverts huggingface/datasets#5214 Everything is handled on the doc-builder side now 😊
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5248/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5248/timeline
No information
No information
172
https://api.github.com/repos/huggingface/datasets/issues/5247
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5247/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5247/comments
https://api.github.com/repos/huggingface/datasets/issues/5247/events
https://github.com/huggingface/datasets/pull/5247
1,451,297,749
PR_kwDODunzps5DAhto
5,247
Set dev version
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
1
2022-11-16T10:17:31Z
2022-11-16T10:22:20Z
2022-11-16T10:17:50Z
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5247', 'html_url': 'https://github.com/huggingface/datasets/pull/5247', 'diff_url': 'https://github.com/huggingface/datasets/pull/5247.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5247.patch', 'merged_at': '2022-11-16T10:17:50Z'}
No information
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5247/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5247/timeline
No information
No information
173
https://api.github.com/repos/huggingface/datasets/issues/5246
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5246/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5246/comments
https://api.github.com/repos/huggingface/datasets/issues/5246/events
https://github.com/huggingface/datasets/pull/5246
1,451,226,055
PR_kwDODunzps5DASLI
5,246
Release: 2.7.0
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/albertvillanova/followers', 'following_url': 'https://api.github.com/users/albertvillanova/following{/other_user}', 'gists_url': 'https://api.github.com/users/albertvillanova/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/albertvillanova/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/albertvillanova/subscriptions', 'organizations_url': 'https://api.github.com/users/albertvillanova/orgs', 'repos_url': 'https://api.github.com/users/albertvillanova/repos', 'events_url': 'https://api.github.com/users/albertvillanova/events{/privacy}', 'received_events_url': 'https://api.github.com/users/albertvillanova/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
1
2022-11-16T09:32:44Z
2022-11-16T09:39:42Z
2022-11-16T09:37:03Z
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5246', 'html_url': 'https://github.com/huggingface/datasets/pull/5246', 'diff_url': 'https://github.com/huggingface/datasets/pull/5246.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5246.patch', 'merged_at': '2022-11-16T09:37:03Z'}
No information
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5246/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5246/timeline
No information
No information
174
https://api.github.com/repos/huggingface/datasets/issues/5245
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5245/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5245/comments
https://api.github.com/repos/huggingface/datasets/issues/5245/events
https://github.com/huggingface/datasets/issues/5245
1,450,376,433
I_kwDODunzps5Wcvzx
5,245
Unable to rename columns in streaming dataset
{'login': 'peregilk', 'id': 9079808, 'node_id': 'MDQ6VXNlcjkwNzk4MDg=', 'avatar_url': 'https://avatars.githubusercontent.com/u/9079808?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/peregilk', 'html_url': 'https://github.com/peregilk', 'followers_url': 'https://api.github.com/users/peregilk/followers', 'following_url': 'https://api.github.com/users/peregilk/following{/other_user}', 'gists_url': 'https://api.github.com/users/peregilk/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/peregilk/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/peregilk/subscriptions', 'organizations_url': 'https://api.github.com/users/peregilk/orgs', 'repos_url': 'https://api.github.com/users/peregilk/repos', 'events_url': 'https://api.github.com/users/peregilk/events{/privacy}', 'received_events_url': 'https://api.github.com/users/peregilk/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
{'login': 'alvarobartt', 'id': 36760800, 'node_id': 'MDQ6VXNlcjM2NzYwODAw', 'avatar_url': 'https://avatars.githubusercontent.com/u/36760800?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/alvarobartt', 'html_url': 'https://github.com/alvarobartt', 'followers_url': 'https://api.github.com/users/alvarobartt/followers', 'following_url': 'https://api.github.com/users/alvarobartt/following{/other_user}', 'gists_url': 'https://api.github.com/users/alvarobartt/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/alvarobartt/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/alvarobartt/subscriptions', 'organizations_url': 'https://api.github.com/users/alvarobartt/orgs', 'repos_url': 'https://api.github.com/users/alvarobartt/repos', 'events_url': 'https://api.github.com/users/alvarobartt/events{/privacy}', 'received_events_url': 'https://api.github.com/users/alvarobartt/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'alvarobartt', 'id': 36760800, 'node_id': 'MDQ6VXNlcjM2NzYwODAw', 'avatar_url': 'https://avatars.githubusercontent.com/u/36760800?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/alvarobartt', 'html_url': 'https://github.com/alvarobartt', 'followers_url': 'https://api.github.com/users/alvarobartt/followers', 'following_url': 'https://api.github.com/users/alvarobartt/following{/other_user}', 'gists_url': 'https://api.github.com/users/alvarobartt/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/alvarobartt/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/alvarobartt/subscriptions', 'organizations_url': 'https://api.github.com/users/alvarobartt/orgs', 'repos_url': 'https://api.github.com/users/alvarobartt/repos', 'events_url': 'https://api.github.com/users/alvarobartt/events{/privacy}', 'received_events_url': 'https://api.github.com/users/alvarobartt/received_events', 'type': 'User', 'site_admin': False}]
No information
7
2022-11-15T21:04:41Z
2022-11-28T12:53:24Z
2022-11-28T12:53:24Z
NONE
No information
No information
No information
### Describe the bug Trying to rename a column in a streaming dataset destroys the features object. ### Steps to reproduce the bug The following code illustrates the error: ``` from datasets import load_dataset dataset = load_dataset('mc4', 'en', streaming=True, split='train') dataset.info.features # {'text': Value(dtype='string', id=None), 'timestamp': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None)} dataset = dataset.rename_column("text", "content") dataset.info.features # This returned object is now None! ``` ### Expected behavior Renaming should just update the renamed column; the features object should remain available. ### Environment info datasets 2.6.1
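A minimal workaround sketch, assuming the underlying data is unchanged and only the metadata is lost: rebuild the `Features` object by hand after the rename (the field names and types below mirror the mC4 features shown above).

```python
from datasets import load_dataset, Features, Value

dataset = load_dataset("mc4", "en", streaming=True, split="train")
dataset = dataset.rename_column("text", "content")
print(dataset.info.features)  # None after the rename (the reported bug)

# Workaround (assumption): manually restore the features with the new name.
dataset.info.features = Features(
    {
        "content": Value(dtype="string"),
        "timestamp": Value(dtype="string"),
        "url": Value(dtype="string"),
    }
)
```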
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5245/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5245/timeline
No information
completed
175
https://api.github.com/repos/huggingface/datasets/issues/5244
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5244/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5244/comments
https://api.github.com/repos/huggingface/datasets/issues/5244/events
https://github.com/huggingface/datasets/issues/5244
1,450,019,225
I_kwDODunzps5WbYmZ
5,244
Allow dataset streaming from a private source when loading a dataset with a dataset loading script
{'login': 'Hubert-Bonisseur', 'id': 48770768, 'node_id': 'MDQ6VXNlcjQ4NzcwNzY4', 'avatar_url': 'https://avatars.githubusercontent.com/u/48770768?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Hubert-Bonisseur', 'html_url': 'https://github.com/Hubert-Bonisseur', 'followers_url': 'https://api.github.com/users/Hubert-Bonisseur/followers', 'following_url': 'https://api.github.com/users/Hubert-Bonisseur/following{/other_user}', 'gists_url': 'https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/Hubert-Bonisseur/subscriptions', 'organizations_url': 'https://api.github.com/users/Hubert-Bonisseur/orgs', 'repos_url': 'https://api.github.com/users/Hubert-Bonisseur/repos', 'events_url': 'https://api.github.com/users/Hubert-Bonisseur/events{/privacy}', 'received_events_url': 'https://api.github.com/users/Hubert-Bonisseur/received_events', 'type': 'User', 'site_admin': False}
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}]
open
false
No information
[]
No information
5
2022-11-15T16:02:10Z
2022-11-23T14:02:30Z
No information
NONE
No information
No information
No information
### Feature request Add arguments to the function _get_authentication_headers_for_url_ like custom_endpoint and custom_token in order to add flexibility when downloading files from a private source. It should also be possible to provide these arguments from the dataset loading script, maybe giving them to the dl_manager. ### Motivation It is possible to share a dataset hosted on another platform by writing a dataset loading script. It works perfectly for publicly available resources. For resources that require authentication, you can provide a [download_custom](https://huggingface.co/docs/datasets/package_reference/builder_classes#datasets.DownloadManager) method to the download_manager. Unfortunately, this function doesn't work with **dataset streaming**. A solution that would allow dataset streaming from private sources is a more flexible _get_authentication_headers_for_url_ function. ### Your contribution Would you be interested in this improvement? If so, I could provide a PR. I've got something working locally, but it's not very clean; I'd need some guidance regarding integration.
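A rough sketch of the proposed signature (the `custom_endpoint` and `custom_token` names are the reporter's suggested additions, not an existing datasets API):

```python
# Hypothetical: extend the header builder so a loading script can point it
# at a private, non-Hub endpoint with its own token.
def get_authentication_headers_for_url(
    url: str,
    use_auth_token=None,
    custom_endpoint: str = None,
    custom_token: str = None,
) -> dict:
    if custom_endpoint is not None and url.startswith(custom_endpoint):
        # Private source declared by the dataset loading script.
        return {"authorization": f"Bearer {custom_token}"} if custom_token else {}
    if isinstance(use_auth_token, str):
        return {"authorization": f"Bearer {use_auth_token}"}
    return {}
```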
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5244/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5244/timeline
No information
No information
176
https://api.github.com/repos/huggingface/datasets/issues/5243
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5243/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5243/comments
https://api.github.com/repos/huggingface/datasets/issues/5243/events
https://github.com/huggingface/datasets/issues/5243
1,449,523,962
I_kwDODunzps5WZfr6
5,243
Download only split data
{'login': 'capsabogdan', 'id': 48530104, 'node_id': 'MDQ6VXNlcjQ4NTMwMTA0', 'avatar_url': 'https://avatars.githubusercontent.com/u/48530104?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/capsabogdan', 'html_url': 'https://github.com/capsabogdan', 'followers_url': 'https://api.github.com/users/capsabogdan/followers', 'following_url': 'https://api.github.com/users/capsabogdan/following{/other_user}', 'gists_url': 'https://api.github.com/users/capsabogdan/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/capsabogdan/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/capsabogdan/subscriptions', 'organizations_url': 'https://api.github.com/users/capsabogdan/orgs', 'repos_url': 'https://api.github.com/users/capsabogdan/repos', 'events_url': 'https://api.github.com/users/capsabogdan/events{/privacy}', 'received_events_url': 'https://api.github.com/users/capsabogdan/received_events', 'type': 'User', 'site_admin': False}
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}]
open
false
No information
[]
No information
3
2022-11-15T10:15:54Z
2022-11-15T20:12:24Z
No information
NONE
No information
No information
No information
### Feature request Is it possible to download only the data that I am requesting and not the entire dataset? I run out of disk space as it seems to download the entire dataset, instead of only the part needed. common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", cache_dir="cache/path...", use_auth_token=True, download_config=DownloadConfig(delete_extracted='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') ) ### Motivation Efficiency improvement. ### Your contribution n/a
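Streaming is one way to read a split today without materializing the whole dataset on disk; a sketch, assuming the standard Common Voice column names:

```python
from datasets import load_dataset

# Stream only the test split: examples are fetched on the fly, and nothing
# is written to the cache beyond small buffers.
test_stream = load_dataset(
    "mozilla-foundation/common_voice_11_0",
    "en",
    split="test",
    streaming=True,
    use_auth_token=True,
)
for example in test_stream.take(5):  # inspect a few rows without a full download
    print(example["sentence"])
```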
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5243/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5243/timeline
No information
No information
177
https://api.github.com/repos/huggingface/datasets/issues/5242
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5242/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5242/comments
https://api.github.com/repos/huggingface/datasets/issues/5242/events
https://github.com/huggingface/datasets/issues/5242
1,449,069,382
I_kwDODunzps5WXwtG
5,242
Failed Data Processing upon upload with zip file full of images
{'login': 'scrambled2', 'id': 82735473, 'node_id': 'MDQ6VXNlcjgyNzM1NDcz', 'avatar_url': 'https://avatars.githubusercontent.com/u/82735473?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/scrambled2', 'html_url': 'https://github.com/scrambled2', 'followers_url': 'https://api.github.com/users/scrambled2/followers', 'following_url': 'https://api.github.com/users/scrambled2/following{/other_user}', 'gists_url': 'https://api.github.com/users/scrambled2/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/scrambled2/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/scrambled2/subscriptions', 'organizations_url': 'https://api.github.com/users/scrambled2/orgs', 'repos_url': 'https://api.github.com/users/scrambled2/repos', 'events_url': 'https://api.github.com/users/scrambled2/events{/privacy}', 'received_events_url': 'https://api.github.com/users/scrambled2/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
1
2022-11-15T02:47:52Z
2022-11-15T17:59:23Z
No information
NONE
No information
No information
No information
I went to AutoTrain and, under image classification, arrived at the step where it was time to prepare my dataset. Screenshot below: ![image](https://user-images.githubusercontent.com/82735473/201814099-3cc5ff8a-88dc-4f5f-8140-f19560641d83.png) I chose the method 2 option. I have a csv file with two columns and ~23,000 files. I uploaded this and chose the image_relpath and target columns. The image uploader said that I could only upload 10,000 singular images at a time, so the 2nd option was to zip the images up and upload a zip archive, which I did. That all uploaded. Now I have the message below. Doesn't the zip archive just get uncompressed on the Hugging Face end? What am I missing here? ![image](https://user-images.githubusercontent.com/82735473/201813838-b50dbbbc-34e8-4d73-9c07-12f9e41c62eb.png)
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5242/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5242/timeline
No information
No information
178
https://api.github.com/repos/huggingface/datasets/issues/5241
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5241/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5241/comments
https://api.github.com/repos/huggingface/datasets/issues/5241/events
https://github.com/huggingface/datasets/pull/5241
1,448,510,407
PR_kwDODunzps5C3MTG
5,241
Support hfh rc version
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
1
2022-11-14T18:05:47Z
2022-11-15T16:11:30Z
2022-11-15T16:09:31Z
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5241', 'html_url': 'https://github.com/huggingface/datasets/pull/5241', 'diff_url': 'https://github.com/huggingface/datasets/pull/5241.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5241.patch', 'merged_at': '2022-11-15T16:09:31Z'}
Otherwise the code doesn't work for hfh 0.11.0rc0, following #5237.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5241/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5241/timeline
No information
No information
179
https://api.github.com/repos/huggingface/datasets/issues/5240
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5240/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5240/comments
https://api.github.com/repos/huggingface/datasets/issues/5240/events
https://github.com/huggingface/datasets/pull/5240
1,448,478,617
PR_kwDODunzps5C3Fe6
5,240
Cleaner error tracebacks for dataset script errors
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/followers', 'following_url': 'https://api.github.com/users/mariosasko/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariosasko/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariosasko/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariosasko/subscriptions', 'organizations_url': 'https://api.github.com/users/mariosasko/orgs', 'repos_url': 'https://api.github.com/users/mariosasko/repos', 'events_url': 'https://api.github.com/users/mariosasko/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariosasko/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
2
2022-11-14T17:42:02Z
2022-11-15T18:26:48Z
2022-11-15T18:24:38Z
CONTRIBUTOR
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5240', 'html_url': 'https://github.com/huggingface/datasets/pull/5240', 'diff_url': 'https://github.com/huggingface/datasets/pull/5240.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5240.patch', 'merged_at': '2022-11-15T18:24:38Z'}
Make the traceback of the errors raised in `_generate_examples` cleaner for easier debugging. Additionally, initialize the `writer` in the for-loop to avoid the `ValueError` from `ArrowWriter.finalize` raised in the `finally` block when no examples are yielded before the `_generate_examples` error. <details> <summary> The full traceback of the "SQLAlchemy ImportError" error that gets printed with these changes: </summary> ```bash ImportError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split_single(self, arg) 1759 _time = time.time() -> 1760 for _, table in generator: 1761 # Only initialize the writer when we have the first record (to avoid having to do the clean-up if an error occurs before that) 9 frames /usr/local/lib/python3.7/dist-packages/datasets/packaged_modules/sql/sql.py in _generate_tables(self) 112 sql_reader = pd.read_sql( --> 113 self.config.sql, self.config.con, chunksize=chunksize, **self.config.pd_read_sql_kwargs 114 ) /usr/local/lib/python3.7/dist-packages/pandas/io/sql.py in read_sql(sql, con, index_col, coerce_float, params, parse_dates, columns, chunksize) 598 """ --> 599 pandas_sql = pandasSQL_builder(con) 600 /usr/local/lib/python3.7/dist-packages/pandas/io/sql.py in pandasSQL_builder(con, schema, meta, is_cursor) 789 elif isinstance(con, str): --> 790 raise ImportError("Using URI string without sqlalchemy installed.") 791 else: ImportError: Using URI string without sqlalchemy installed. The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) <ipython-input-4-5af11af4737b> in <module> ----> 1 ds = Dataset.from_sql('''SELECT * from states WHERE state=="New York";''', "sqlite:///us_covid_data.db") /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in from_sql(sql, con, features, cache_dir, keep_in_memory, **kwargs) 1152 cache_dir=cache_dir, 1153 keep_in_memory=keep_in_memory, -> 1154 **kwargs, 1155 ).read() 1156 /usr/local/lib/python3.7/dist-packages/datasets/io/sql.py in read(self) 47 # try_from_hf_gcs=try_from_hf_gcs, 48 base_path=base_path, ---> 49 use_auth_token=use_auth_token, 50 ) 51 /usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 825 verify_infos=verify_infos, 826 **prepare_split_kwargs, --> 827 **download_and_prepare_kwargs, 828 ) 829 # Sync info /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 912 try: 913 # Prepare split will record examples associated to the split --> 914 self._prepare_split(split_generator, **prepare_split_kwargs) 915 except OSError as e: 916 raise OSError( /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator, file_format, num_proc, max_shard_size) 1652 job_id = 0 1653 for job_id, done, content in self._prepare_split_single( -> 1654 {"gen_kwargs": gen_kwargs, "job_id": job_id, **_prepare_split_args} 1655 ): 1656 if done: /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split_single(self, arg) 1789 raise DatasetGenerationError( 1790 f"An error occured while generating the dataset" -> 1791 ) from e 1792 finally: 1793 yield job_id, False, num_examples_progress_update DatasetGenerationError: An error occurred while generating the dataset ``` </details> PS: I've also considered raising the error as follows: ```python tb = sys.exc_info()[2] raise DatasetGenerationError(f"An error occurred while generating the dataset: {type(e).__name__}: {e}").with_traceback(tb) from None # this raises the DatasetGenerationError with "e"'s traceback ``` But it seems like "from e" is now the [preferred](https://docs.python.org/3/library/exceptions.html#BaseException.with_traceback) way to chain exceptions. Fix https://github.com/huggingface/datasets/issues/5186 cc @nateraw
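A stripped-down illustration of the `raise ... from e` chaining the PR settles on (generic names, not the actual builder code):

```python
class DatasetGenerationError(Exception):
    pass

def _generate_examples():
    raise ImportError("Using URI string without sqlalchemy installed.")

try:
    _generate_examples()
except Exception as e:
    # "from e" keeps the script error attached as the cause, so the clean
    # top-level error still shows where generation actually failed.
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
```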
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5240/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5240/timeline
No information
No information
180
https://api.github.com/repos/huggingface/datasets/issues/5239
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5239/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5239/comments
https://api.github.com/repos/huggingface/datasets/issues/5239/events
https://github.com/huggingface/datasets/pull/5239
1,448,211,373
PR_kwDODunzps5C2L_P
5,239
Add num_proc to from_csv/generator/json/parquet/text
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
2
2022-11-14T14:53:00Z
2022-11-29T16:50:47Z
No information
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5239', 'html_url': 'https://github.com/huggingface/datasets/pull/5239', 'diff_url': 'https://github.com/huggingface/datasets/pull/5239.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5239.patch', 'merged_at': None}
Allow multiprocessing in the from_* methods
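A usage sketch of what this enables (the `num_proc` keyword and the shard paths are assumed from the PR title, not taken from the diff):

```python
from datasets import Dataset

# Read several CSV shards in parallel with 4 worker processes.
ds = Dataset.from_csv(
    ["data/shard-0.csv", "data/shard-1.csv", "data/shard-2.csv"],
    num_proc=4,
)
```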
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5239/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5239/timeline
No information
No information
181
https://api.github.com/repos/huggingface/datasets/issues/5238
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5238/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5238/comments
https://api.github.com/repos/huggingface/datasets/issues/5238/events
https://github.com/huggingface/datasets/pull/5238
1,448,211,251
PR_kwDODunzps5C2L9h
5,238
Make `Version` hashable
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/followers', 'following_url': 'https://api.github.com/users/mariosasko/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariosasko/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariosasko/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariosasko/subscriptions', 'organizations_url': 'https://api.github.com/users/mariosasko/orgs', 'repos_url': 'https://api.github.com/users/mariosasko/repos', 'events_url': 'https://api.github.com/users/mariosasko/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariosasko/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
1
2022-11-14T14:52:55Z
2022-11-14T15:30:02Z
2022-11-14T15:27:35Z
CONTRIBUTOR
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5238', 'html_url': 'https://github.com/huggingface/datasets/pull/5238', 'diff_url': 'https://github.com/huggingface/datasets/pull/5238.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5238.patch', 'merged_at': '2022-11-14T15:27:35Z'}
Add `__hash__` to the `Version` class to make it hashable (and remove the unneeded methods), as `Version("0.0.0")` is the default value of `BuilderConfig.version` and the default fields of a dataclass need to be hashable in Python 3.11. Fix https://github.com/huggingface/datasets/issues/5230
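A minimal sketch of the mechanism (field names assumed): an explicitly defined `__hash__` survives the `@dataclass` decorator, so `Version("0.0.0")` passes Python 3.11's hashability check for dataclass defaults.

```python
from dataclasses import dataclass

@dataclass
class Version:
    version_str: str

    def __hash__(self):  # an explicit __hash__ is kept by @dataclass
        return hash(self.version_str)

@dataclass
class BuilderConfig:
    # Allowed in Python 3.11 because the default is now hashable.
    version: Version = Version("0.0.0")
```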
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5238/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5238/timeline
No information
No information
182
https://api.github.com/repos/huggingface/datasets/issues/5237
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5237/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5237/comments
https://api.github.com/repos/huggingface/datasets/issues/5237/events
https://github.com/huggingface/datasets/pull/5237
1,448,202,491
PR_kwDODunzps5C2KGz
5,237
Encode path only for old versions of hfh
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
1
2022-11-14T14:46:57Z
2022-11-14T17:38:18Z
2022-11-14T17:35:59Z
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5237', 'html_url': 'https://github.com/huggingface/datasets/pull/5237', 'diff_url': 'https://github.com/huggingface/datasets/pull/5237.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5237.patch', 'merged_at': '2022-11-14T17:35:59Z'}
The next version of `huggingface-hub` (0.11) already encodes the `path`, and we don't want to encode it twice.
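A sketch of the version gate this implies (assumed shape, not the exact diff):

```python
from urllib.parse import quote

import huggingface_hub
from packaging import version

def maybe_quote_path(path: str) -> str:
    # Old hfh releases expect a pre-encoded path; 0.11+ encodes internally,
    # so encoding here as well would double-encode it.
    if version.parse(huggingface_hub.__version__) < version.parse("0.11.0rc0"):
        return quote(path, safe="")
    return path
```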
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5237/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5237/timeline
No information
No information
183
https://api.github.com/repos/huggingface/datasets/issues/5236
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5236/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5236/comments
https://api.github.com/repos/huggingface/datasets/issues/5236/events
https://github.com/huggingface/datasets/pull/5236
1,448,190,801
PR_kwDODunzps5C2Hnj
5,236
Handle ArrowNotImplementedError caused by try_type being Image or Audio in cast
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/followers', 'following_url': 'https://api.github.com/users/mariosasko/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariosasko/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariosasko/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariosasko/subscriptions', 'organizations_url': 'https://api.github.com/users/mariosasko/orgs', 'repos_url': 'https://api.github.com/users/mariosasko/repos', 'events_url': 'https://api.github.com/users/mariosasko/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariosasko/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
2
2022-11-14T14:38:59Z
2022-11-14T16:04:29Z
2022-11-14T16:01:48Z
CONTRIBUTOR
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5236', 'html_url': 'https://github.com/huggingface/datasets/pull/5236', 'diff_url': 'https://github.com/huggingface/datasets/pull/5236.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5236.patch', 'merged_at': '2022-11-14T16:01:48Z'}
Handle the `ArrowNotImplementedError` thrown when `try_type` is `Image` or `Audio` and the input array cannot be converted to their storage formats. Reproducer: ```python from datasets import Dataset from PIL import Image import requests ds = Dataset.from_dict({"image": [Image.open(requests.get("https://upload.wikimedia.org/wikipedia/commons/e/e9/Felis_silvestris_silvestris_small_gradual_decrease_of_quality.png", stream=True).raw)]}) ds.map(lambda x: {"image": True}) # ArrowNotImplementedError ``` PS: This could also be fixed by raising `TypeError` in `{Image, Audio}.cast_storage` for unsupported types instead of passing the array to `array_cast`.
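The shape of the handling, reduced to its essentials (a sketch; the real change targets datasets' typed-cast path, not a bare `Array.cast`):

```python
import pyarrow as pa

def try_cast(array: pa.Array, try_type: pa.DataType) -> pa.Array:
    try:
        return array.cast(try_type)
    except pa.ArrowNotImplementedError:
        # try_type was only a hint (e.g. an Image/Audio storage type this
        # array cannot become): keep the inferred type instead of failing.
        return array
```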
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5236/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5236/timeline
No information
No information
184
https://api.github.com/repos/huggingface/datasets/issues/5235
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5235/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5235/comments
https://api.github.com/repos/huggingface/datasets/issues/5235/events
https://github.com/huggingface/datasets/pull/5235
1,448,052,660
PR_kwDODunzps5C1pjc
5,235
Pin `typer` version in tests to <0.5 to fix Windows CI
{'login': 'polinaeterna', 'id': 16348744, 'node_id': 'MDQ6VXNlcjE2MzQ4NzQ0', 'avatar_url': 'https://avatars.githubusercontent.com/u/16348744?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/polinaeterna', 'html_url': 'https://github.com/polinaeterna', 'followers_url': 'https://api.github.com/users/polinaeterna/followers', 'following_url': 'https://api.github.com/users/polinaeterna/following{/other_user}', 'gists_url': 'https://api.github.com/users/polinaeterna/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/polinaeterna/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/polinaeterna/subscriptions', 'organizations_url': 'https://api.github.com/users/polinaeterna/orgs', 'repos_url': 'https://api.github.com/users/polinaeterna/repos', 'events_url': 'https://api.github.com/users/polinaeterna/events{/privacy}', 'received_events_url': 'https://api.github.com/users/polinaeterna/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
0
2022-11-14T13:17:02Z
2022-11-14T15:43:01Z
2022-11-14T13:41:12Z
CONTRIBUTOR
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5235', 'html_url': 'https://github.com/huggingface/datasets/pull/5235', 'diff_url': 'https://github.com/huggingface/datasets/pull/5235.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5235.patch', 'merged_at': '2022-11-14T13:41:12Z'}
Otherwise `click` fails on Windows: ``` Traceback (most recent call last): File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\spacy\__main__.py", line 4, in <module> setup_cli() File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\spacy\cli\_util.py", line 71, in setup_cli command(prog_name=COMMAND) File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\click\core.py", line 829, in __call__ return self.main(*args, **kwargs) File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\typer\core.py", line 785, in main **extra, File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\typer\core.py", line 190, in _main args = click.utils._expand_args(args) AttributeError: module 'click.utils' has no attribute '_expand_args' ``` See https://github.com/tiangolo/typer/issues/427
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5235/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5235/timeline
No information
No information
185
https://api.github.com/repos/huggingface/datasets/issues/5234
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5234/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5234/comments
https://api.github.com/repos/huggingface/datasets/issues/5234/events
https://github.com/huggingface/datasets/pull/5234
1,447,999,062
PR_kwDODunzps5C1diq
5,234
fix: dataset path should be absolute
{'login': 'vigsterkr', 'id': 30353, 'node_id': 'MDQ6VXNlcjMwMzUz', 'avatar_url': 'https://avatars.githubusercontent.com/u/30353?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/vigsterkr', 'html_url': 'https://github.com/vigsterkr', 'followers_url': 'https://api.github.com/users/vigsterkr/followers', 'following_url': 'https://api.github.com/users/vigsterkr/following{/other_user}', 'gists_url': 'https://api.github.com/users/vigsterkr/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/vigsterkr/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/vigsterkr/subscriptions', 'organizations_url': 'https://api.github.com/users/vigsterkr/orgs', 'repos_url': 'https://api.github.com/users/vigsterkr/repos', 'events_url': 'https://api.github.com/users/vigsterkr/events{/privacy}', 'received_events_url': 'https://api.github.com/users/vigsterkr/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
1
2022-11-14T12:47:40Z
2022-11-18T15:14:16Z
No information
NONE
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5234', 'html_url': 'https://github.com/huggingface/datasets/pull/5234', 'diff_url': 'https://github.com/huggingface/datasets/pull/5234.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5234.patch', 'merged_at': None}
`cache_file_name` depends on the dataset's path. A simple way this can cause a problem: ``` import os import datasets def add_prefix(example): example["text"] = "Review: " + example["text"] return example ds = datasets.load_from_disk("a/relative/path") os.chdir("/tmp") ds_1 = ds.map(add_prefix) ``` While the `chdir` may feel quite contrived, there are many scenarios in which the current working dir can/will change...
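The fix suggested by the title, sketched as a hypothetical helper (not the PR's exact diff): resolve the path once at load time so later `chdir` calls cannot move the cache.

```python
import os

def resolve_dataset_path(path: str) -> str:
    # Absolute paths make cache_file_name stable under os.chdir().
    return os.path.abspath(os.path.expanduser(path))
```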
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5234/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5234/timeline
No information
No information
186
https://api.github.com/repos/huggingface/datasets/issues/5233
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5233/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5233/comments
https://api.github.com/repos/huggingface/datasets/issues/5233/events
https://github.com/huggingface/datasets/pull/5233
1,447,906,868
PR_kwDODunzps5C1JVh
5,233
Fix shards in IterableDataset.from_generator
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'following_url': 'https://api.github.com/users/lhoestq/following{/other_user}', 'gists_url': 'https://api.github.com/users/lhoestq/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/lhoestq/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/lhoestq/subscriptions', 'organizations_url': 'https://api.github.com/users/lhoestq/orgs', 'repos_url': 'https://api.github.com/users/lhoestq/repos', 'events_url': 'https://api.github.com/users/lhoestq/events{/privacy}', 'received_events_url': 'https://api.github.com/users/lhoestq/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
1
2022-11-14T11:42:09Z
2022-11-14T14:16:03Z
2022-11-14T14:13:22Z
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5233', 'html_url': 'https://github.com/huggingface/datasets/pull/5233', 'diff_url': 'https://github.com/huggingface/datasets/pull/5233.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5233.patch', 'merged_at': '2022-11-14T14:13:22Z'}
Allow defining a sharded iterable dataset
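A usage sketch of the sharding this enables (following `from_generator`'s `gen_kwargs` design; the shard names are illustrative): passing a list in `gen_kwargs` lets the dataset split it across workers.

```python
from datasets import IterableDataset

def gen(shards):
    for shard in shards:
        for i in range(3):
            yield {"shard": shard, "i": i}

# Each DataLoader worker can now receive a subset of the shards list.
ds = IterableDataset.from_generator(
    gen, gen_kwargs={"shards": [f"shard-{k}.txt" for k in range(4)]}
)
for row in ds.take(3):
    print(row)
```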
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5233/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5233/timeline
No information
No information
187
https://api.github.com/repos/huggingface/datasets/issues/5232
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5232/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5232/comments
https://api.github.com/repos/huggingface/datasets/issues/5232/events
https://github.com/huggingface/datasets/issues/5232
1,446,294,165
I_kwDODunzps5WNLKV
5,232
Incompatible dill versions in datasets 2.6.1
{'login': 'vinaykakade', 'id': 10574123, 'node_id': 'MDQ6VXNlcjEwNTc0MTIz', 'avatar_url': 'https://avatars.githubusercontent.com/u/10574123?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/vinaykakade', 'html_url': 'https://github.com/vinaykakade', 'followers_url': 'https://api.github.com/users/vinaykakade/followers', 'following_url': 'https://api.github.com/users/vinaykakade/following{/other_user}', 'gists_url': 'https://api.github.com/users/vinaykakade/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/vinaykakade/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/vinaykakade/subscriptions', 'organizations_url': 'https://api.github.com/users/vinaykakade/orgs', 'repos_url': 'https://api.github.com/users/vinaykakade/repos', 'events_url': 'https://api.github.com/users/vinaykakade/events{/privacy}', 'received_events_url': 'https://api.github.com/users/vinaykakade/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
2
2022-11-12T06:46:23Z
2022-11-14T08:24:43Z
2022-11-14T08:07:59Z
NONE
No information
No information
No information
### Describe the bug datasets version 2.6.1 has a dependency on dill<0.3.6. This causes a conflict with dill>=0.3.6 required by the multiprocess dependency of datasets 2.6.1. This issue is already fixed in https://github.com/huggingface/datasets/pull/5166/files, but the fix has not yet been released. Please release a new version of the datasets library to fix this. ### Steps to reproduce the bug 1. Create requirements.in with the only dependency being datasets (or datasets[s3]) 2. Run pip-compile 3. The output is as follows: ``` Could not find a version that matches dill<0.3.6,>=0.3.6 (from datasets[s3]==2.6.1->-r requirements.in (line 1)) Tried: 0.2, 0.2, 0.2.1, 0.2.1, 0.2.2, 0.2.2, 0.2.3, 0.2.3, 0.2.4, 0.2.4, 0.2.5, 0.2.5, 0.2.6, 0.2.7, 0.2.7.1, 0.2.8, 0.2.8.1, 0.2.8.2, 0.2.9, 0.3.0, 0.3.1, 0.3.1.1, 0.3.2, 0.3.3, 0.3.3, 0.3.4, 0.3.4, 0.3.5, 0.3.5, 0.3.5.1, 0.3.5.1, 0.3.6, 0.3.6 Skipped pre-versions: 0.1a1, 0.2a1, 0.2a1, 0.2b1, 0.2b1 There are incompatible versions in the resolved dependencies: dill<0.3.6 (from datasets[s3]==2.6.1->-r requirements.in (line 1)) dill>=0.3.6 (from multiprocess==0.70.14->datasets[s3]==2.6.1->-r requirements.in (line 1)) ``` ### Expected behavior pip-compile produces requirements.txt without any conflicts ### Environment info datasets version 2.6.1
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5232/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5232/timeline
No information
completed
188
https://api.github.com/repos/huggingface/datasets/issues/5231
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5231/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5231/comments
https://api.github.com/repos/huggingface/datasets/issues/5231/events
https://github.com/huggingface/datasets/issues/5231
1,445,883,267
I_kwDODunzps5WLm2D
5,231
Using `set_format(type='torch', columns=columns)` makes Array2D/3D columns stop formatting correctly
{'login': 'plamb-viso', 'id': 99206017, 'node_id': 'U_kgDOBenDgQ', 'avatar_url': 'https://avatars.githubusercontent.com/u/99206017?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/plamb-viso', 'html_url': 'https://github.com/plamb-viso', 'followers_url': 'https://api.github.com/users/plamb-viso/followers', 'following_url': 'https://api.github.com/users/plamb-viso/following{/other_user}', 'gists_url': 'https://api.github.com/users/plamb-viso/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/plamb-viso/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/plamb-viso/subscriptions', 'organizations_url': 'https://api.github.com/users/plamb-viso/orgs', 'repos_url': 'https://api.github.com/users/plamb-viso/repos', 'events_url': 'https://api.github.com/users/plamb-viso/events{/privacy}', 'received_events_url': 'https://api.github.com/users/plamb-viso/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
1
2022-11-11T18:54:36Z
2022-11-11T20:42:29Z
2022-11-11T18:59:50Z
NONE
No information
No information
No information
I have a Dataset with two Features defined as follows: ``` 'image': Array3D(dtype="int64", shape=(3, 224, 224)), 'bbox': Array2D(dtype="int64", shape=(512, 4)), ``` On said dataset, if I `dataset.set_format(type='torch')` and then use the dataset in a dataloader, these columns are correctly cast to Tensors of (batch_size, 3, 224, 224), for example. However, if I `dataset.set_format(type='torch', columns=['image', 'bbox'])`, these columns are cast to lists of tensors and miss the batch size completely (the dimension of size 3 becomes the list length). I'm currently digging through the datasets formatting code to try and find out why, but was curious if someone knew an immediate solution for this.
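A self-contained reproduction sketch of the report (the array contents are dummies; the shapes follow the features above):

```python
import numpy as np
from datasets import Array2D, Array3D, Dataset, Features

features = Features({
    "image": Array3D(dtype="int64", shape=(3, 224, 224)),
    "bbox": Array2D(dtype="int64", shape=(512, 4)),
})
ds = Dataset.from_dict(
    {
        "image": [np.zeros((3, 224, 224), dtype=np.int64)],
        "bbox": [np.zeros((512, 4), dtype=np.int64)],
    },
    features=features,
)

ds.set_format(type="torch")
print(type(ds[:1]["image"]))                     # expected: a single Tensor

ds.set_format(type="torch", columns=["image", "bbox"])
print(type(ds[:1]["image"]))                     # reported: a list of Tensors
```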
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5231/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5231/timeline
No information
completed
189
https://api.github.com/repos/huggingface/datasets/issues/5230
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5230/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5230/comments
https://api.github.com/repos/huggingface/datasets/issues/5230/events
https://github.com/huggingface/datasets/issues/5230
1,445,507,580
I_kwDODunzps5WKLH8
5,230
dataclasses error when importing the library in python 3.11
{'login': 'yonikremer', 'id': 76044840, 'node_id': 'MDQ6VXNlcjc2MDQ0ODQw', 'avatar_url': 'https://avatars.githubusercontent.com/u/76044840?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/yonikremer', 'html_url': 'https://github.com/yonikremer', 'followers_url': 'https://api.github.com/users/yonikremer/followers', 'following_url': 'https://api.github.com/users/yonikremer/following{/other_user}', 'gists_url': 'https://api.github.com/users/yonikremer/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/yonikremer/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/yonikremer/subscriptions', 'organizations_url': 'https://api.github.com/users/yonikremer/orgs', 'repos_url': 'https://api.github.com/users/yonikremer/repos', 'events_url': 'https://api.github.com/users/yonikremer/events{/privacy}', 'received_events_url': 'https://api.github.com/users/yonikremer/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/followers', 'following_url': 'https://api.github.com/users/mariosasko/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariosasko/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariosasko/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariosasko/subscriptions', 'organizations_url': 'https://api.github.com/users/mariosasko/orgs', 'repos_url': 'https://api.github.com/users/mariosasko/repos', 'events_url': 'https://api.github.com/users/mariosasko/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariosasko/received_events', 'type': 'User', 'site_admin': False}
[{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/followers', 'following_url': 'https://api.github.com/users/mariosasko/following{/other_user}', 'gists_url': 'https://api.github.com/users/mariosasko/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/mariosasko/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/mariosasko/subscriptions', 'organizations_url': 'https://api.github.com/users/mariosasko/orgs', 'repos_url': 'https://api.github.com/users/mariosasko/repos', 'events_url': 'https://api.github.com/users/mariosasko/events{/privacy}', 'received_events_url': 'https://api.github.com/users/mariosasko/received_events', 'type': 'User', 'site_admin': False}]
No information
2
2022-11-11T13:53:49Z
2022-11-14T20:51:44Z
2022-11-14T15:27:37Z
NONE
No information
No information
No information
### Describe the bug When I import datasets using python 3.11 the dataclasses standard library raises the following error: `ValueError: mutable default <class 'datasets.utils.version.Version'> for field version is not allowed: use default_factory` When I tried to import the library using the following jupyter notebook: ``` %%bash # create python 3.11 conda env conda create --yes --quiet -n myenv -c conda-forge python=3.11 # activate is source activate myenv # install pyarrow /opt/conda/envs/myenv/bin/python -m pip install --quiet --extra-index-url https://pypi.fury.io/arrow-nightlies/ \ --prefer-binary --pre pyarrow # install datasets /opt/conda/envs/myenv/bin/python -m pip install --quiet datasets ``` ``` # create a python file that only imports datasets with open("import_datasets.py", 'w') as f: f.write("import datasets") # run it with the env !/opt/conda/envs/myenv/bin/python import_datasets.py ``` I get the following error: ``` Traceback (most recent call last): File "/kaggle/working/import_datasets.py", line 1, in <module> import datasets File "/opt/conda/envs/myenv/lib/python3.11/site-packages/datasets/__init__.py", line 45, in <module> from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File "/opt/conda/envs/myenv/lib/python3.11/site-packages/datasets/builder.py", line 91, in <module> @dataclass ^^^^^^^^^ File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 1221, in dataclass return wrap(cls) ^^^^^^^^^ File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 1211, in wrap return _process_class(cls, init, repr, eq, order, unsafe_hash, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 959, in _process_class cls_fields.append(_get_field(cls, name, type, kw_only)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/myenv/lib/python3.11/dataclasses.py", line 816, in _get_field raise ValueError(f'mutable default {type(f.default)} for field ' ValueError: mutable default <class 'datasets.utils.version.Version'> for field version is not allowed: use default_factory ``` This is probably due to one of the following changes in the [dataclasses standard library](https://docs.python.org/3/library/dataclasses.html) in version 3.11: 1. Changed in version 3.11: Instead of looking for and disallowing objects of type list, dict, or set, unhashable objects are now not allowed as default values. Unhashability is used to approximate mutability. 2. fields may optionally specify a default value, using normal Python syntax: ``` @dataclass class C: a: int # 'a' has no default value b: int = 0 # assign a default value for 'b' In this example, both a and b will be included in the added __init__() method, which will be defined as: def __init__(self, a: int, b: int = 0): ``` 3. Changed in version 3.11: If a field name is already included in the __slots__ of a base class, it will not be included in the generated __slots__ to prevent [overriding them](https://docs.python.org/3/reference/datamodel.html#datamodel-note-slots). Therefore, do not use __slots__ to retrieve the field names of a dataclass. Use [fields()](https://docs.python.org/3/library/dataclasses.html#dataclasses.fields) instead. To be able to determine inherited slots, base class __slots__ may be any iterable, but not an iterator. 4. weakref_slot: If true (the default is False), add a slot named β€œ__weakref__”, which is required to make an instance weakref-able. 
It is an error to specify weakref_slot=True without also specifying slots=True. [TypeError](https://docs.python.org/3/library/exceptions.html#TypeError) will be raised if a field without a default value follows a field with a default value. This is true whether this occurs in a single class, or as a result of class inheritance. ### Steps to reproduce the bug Steps to reproduce the behavior: 1. go to [the notebook in kaggle](https://www.kaggle.com/yonikremer/repreducing-issue) 2. run both of the cells ### Expected behavior I'm expecting no issues. This error should not occur. ### Environment info kaggle kernels, with default settings: pin to original environment, no accelerator.
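For context, here is a minimal sketch of the `default_factory` pattern the error message points to. This is not the library's actual fix; `Version` below is a hypothetical stand-in that reproduces the unhashability (defining `__eq__` without `__hash__` is what Python 3.11 now rejects as a class-level default):

```python
from dataclasses import dataclass, field


class Version:
    """Hypothetical stand-in for datasets.utils.version.Version."""

    def __init__(self, version_str: str = "0.0.0"):
        self.version_str = version_str

    def __eq__(self, other):
        # Defining __eq__ without __hash__ makes instances unhashable,
        # which Python 3.11 dataclasses reject as field defaults.
        return self.version_str == other.version_str


@dataclass
class BuilderConfig:
    name: str = "default"
    # `version: Version = Version("1.0.0")` raises ValueError on 3.11;
    # a default_factory defers construction to __init__ time instead.
    version: Version = field(default_factory=lambda: Version("1.0.0"))


print(BuilderConfig().version.version_str)  # 1.0.0
```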
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5230/reactions', 'total_count': 2, '+1': 2, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5230/timeline
No information
completed
190
https://api.github.com/repos/huggingface/datasets/issues/5229
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5229/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5229/comments
https://api.github.com/repos/huggingface/datasets/issues/5229/events
https://github.com/huggingface/datasets/issues/5229
1,445,121,028
I_kwDODunzps5WIswE
5,229
Type error when calling `map` over dataset containing 0-d tensors
{'login': 'phipsgabler', 'id': 7878215, 'node_id': 'MDQ6VXNlcjc4NzgyMTU=', 'avatar_url': 'https://avatars.githubusercontent.com/u/7878215?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/phipsgabler', 'html_url': 'https://github.com/phipsgabler', 'followers_url': 'https://api.github.com/users/phipsgabler/followers', 'following_url': 'https://api.github.com/users/phipsgabler/following{/other_user}', 'gists_url': 'https://api.github.com/users/phipsgabler/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/phipsgabler/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/phipsgabler/subscriptions', 'organizations_url': 'https://api.github.com/users/phipsgabler/orgs', 'repos_url': 'https://api.github.com/users/phipsgabler/repos', 'events_url': 'https://api.github.com/users/phipsgabler/events{/privacy}', 'received_events_url': 'https://api.github.com/users/phipsgabler/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
1
2022-11-11T08:27:28Z
2022-11-29T19:24:03Z
No information
NONE
No information
No information
No information
### Describe the bug 0-dimensional tensors in a dataset lead to `TypeError: iteration over a 0-d array` when calling `map`. It is easy to generate such tensors by using `.with_format("...")` on the whole dataset. ### Steps to reproduce the bug ``` ds = datasets.Dataset.from_list([{"a": 1}, {"a": 1}]).with_format("torch") ds.map(None) ``` ### Expected behavior Getting back `ds` without errors. ### Environment info Python 3.10.8 datasets 2.6. torch 1.13.0
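A possible workaround sketch (not the library's official fix, and assuming the error comes from iterating 0-d tensors during formatting): reset the format so `map` sees plain Python objects, then re-apply it afterwards.

```python
import datasets

ds = datasets.Dataset.from_list([{"a": 1}, {"a": 1}]).with_format("torch")

# Drop the torch formatting, map over plain Python objects, then restore it.
ds_mapped = ds.with_format(None).map(lambda example: example)
ds_mapped = ds_mapped.with_format("torch")
print(ds_mapped[0]["a"])  # tensor(1)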
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5229/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5229/timeline
No information
No information
191
https://api.github.com/repos/huggingface/datasets/issues/5228
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5228/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5228/comments
https://api.github.com/repos/huggingface/datasets/issues/5228/events
https://github.com/huggingface/datasets/issues/5228
1,444,763,105
I_kwDODunzps5WHVXh
5,228
Loading a dataset from the hub fails if you happen to have a folder of the same name
{'login': 'dakinggg', 'id': 43149077, 'node_id': 'MDQ6VXNlcjQzMTQ5MDc3', 'avatar_url': 'https://avatars.githubusercontent.com/u/43149077?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/dakinggg', 'html_url': 'https://github.com/dakinggg', 'followers_url': 'https://api.github.com/users/dakinggg/followers', 'following_url': 'https://api.github.com/users/dakinggg/following{/other_user}', 'gists_url': 'https://api.github.com/users/dakinggg/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/dakinggg/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/dakinggg/subscriptions', 'organizations_url': 'https://api.github.com/users/dakinggg/orgs', 'repos_url': 'https://api.github.com/users/dakinggg/repos', 'events_url': 'https://api.github.com/users/dakinggg/events{/privacy}', 'received_events_url': 'https://api.github.com/users/dakinggg/received_events', 'type': 'User', 'site_admin': False}
[]
open
false
No information
[]
No information
2
2022-11-11T00:51:54Z
2022-11-14T18:17:34Z
No information
NONE
No information
No information
No information
### Describe the bug I'm not 100% sure this should be considered a bug, but it was certainly annoying to figure out the cause of. And perhaps I am just missing a specific argument needed to avoid this conflict. Basically I had a situation where multiple workers were downloading different parts of the glue dataset and then training on them. Additionally, they were writing their checkpoints to a folder called `glue`. This meant that once one worker had created the `glue` folder to write checkpoints to, the next worker to try to load a glue dataset would fail as shown in the minimal repro below. I'm not sure what the solution would be since I'm not super familiar with the `datasets` code, but I would expect `load_dataset` to not crash just because i have a local folder with the same name as a dataset from the hub. ### Steps to reproduce the bug ``` In [1]: import datasets In [2]: rte = datasets.load_dataset('glue', 'rte') Downloading and preparing dataset glue/rte to /Users/danielking/.cache/huggingface/datasets/glue/rte/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad... Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 697k/697k [00:00<00:00, 6.08MB/s] Dataset glue downloaded and prepared to /Users/danielking/.cache/huggingface/datasets/glue/rte/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad. Subsequent calls will reuse this data. 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 773.81it/s] In [3]: import os In [4]: os.mkdir('glue') In [5]: rte = datasets.load_dataset('glue', 'rte') --------------------------------------------------------------------------- EmptyDatasetError Traceback (most recent call last) <ipython-input-5-0d6b9ad8bbd0> in <cell line: 1>() ----> 1 rte = datasets.load_dataset('glue', 'rte') ~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1717 1718 # Create a dataset builder -> 1719 builder_instance = load_dataset_builder( 1720 path=path, 1721 name=name, ~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1495 download_config = download_config.copy() if download_config else DownloadConfig() 1496 download_config.use_auth_token = use_auth_token -> 1497 dataset_module = dataset_module_factory( 1498 path, 1499 revision=revision, ~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/load.py in dataset_module_factory(path, revision, 
download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1152 ).get_module() 1153 elif os.path.isdir(path): -> 1154 return LocalDatasetModuleFactoryWithoutScript( 1155 path, data_dir=data_dir, data_files=data_files, download_mode=download_mode 1156 ).get_module() ~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/load.py in get_module(self) 624 base_path = os.path.join(self.path, self.data_dir) if self.data_dir else self.path 625 patterns = ( --> 626 sanitize_patterns(self.data_files) if self.data_files is not None else get_data_patterns_locally(base_path) 627 ) 628 data_files = DataFilesDict.from_local_or_remote( ~/miniconda3/envs/composer/lib/python3.9/site-packages/datasets/data_files.py in get_data_patterns_locally(base_path) 458 return _get_data_files_patterns(resolver) 459 except FileNotFoundError: --> 460 raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None 461 462 EmptyDatasetError: The directory at glue doesn't contain any data files ``` ### Expected behavior Dataset is still able to be loaded from the hub even if I have a local folder with the same name. ### Environment info datasets version: 2.6.1
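Until this is resolved, one workaround sketch (an assumption based on the traceback above, where `load_dataset` resolves the local directory before the hub name) is to make sure the working directory does not contain a folder with the dataset's name before loading:

```python
import os
import tempfile

import datasets

# If a local "glue" folder exists, load_dataset resolves the name as a local
# path; running from a scratch directory sidesteps the collision.
if os.path.isdir("glue"):
    os.chdir(tempfile.mkdtemp())

rte = datasets.load_dataset("glue", "rte")
```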
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5228/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5228/timeline
No information
No information
192
https://api.github.com/repos/huggingface/datasets/issues/5227
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5227/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5227/comments
https://api.github.com/repos/huggingface/datasets/issues/5227/events
https://github.com/huggingface/datasets/issues/5227
1,444,620,094
I_kwDODunzps5WGyc-
5,227
datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files
{'login': 'ScottM-wizard', 'id': 102275116, 'node_id': 'U_kgDOBhiYLA', 'avatar_url': 'https://avatars.githubusercontent.com/u/102275116?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/ScottM-wizard', 'html_url': 'https://github.com/ScottM-wizard', 'followers_url': 'https://api.github.com/users/ScottM-wizard/followers', 'following_url': 'https://api.github.com/users/ScottM-wizard/following{/other_user}', 'gists_url': 'https://api.github.com/users/ScottM-wizard/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/ScottM-wizard/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/ScottM-wizard/subscriptions', 'organizations_url': 'https://api.github.com/users/ScottM-wizard/orgs', 'repos_url': 'https://api.github.com/users/ScottM-wizard/repos', 'events_url': 'https://api.github.com/users/ScottM-wizard/events{/privacy}', 'received_events_url': 'https://api.github.com/users/ScottM-wizard/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
1
2022-11-10T21:57:06Z
2022-11-10T22:05:43Z
2022-11-10T22:05:43Z
NONE
No information
No information
No information
### Describe the bug Running these lines: ``` from datasets import list_datasets, load_dataset dataset = load_dataset("wikisql", "binary") ``` raises: `datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files` And yet 'wikisql' is reported to exist via list_datasets(). Any help appreciated. ### Steps to reproduce the bug Run the two lines above. ### Expected behavior The dataset should load. This same code used to work. ### Environment info Mac OS
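Given the similar report in #5228 above, one possible cause (an assumption, not confirmed here) is a local `wikisql` folder shadowing the hub dataset name. A quick diagnostic sketch:

```python
import os

from datasets import load_dataset

# A local "wikisql" directory makes load_dataset treat the name as a local
# path, which raises EmptyDatasetError when that folder holds no data files.
assert not os.path.isdir("wikisql"), "a local 'wikisql' folder shadows the hub dataset"

dataset = load_dataset("wikisql")
```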
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5227/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5227/timeline
No information
completed
193
https://api.github.com/repos/huggingface/datasets/issues/5226
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5226/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5226/comments
https://api.github.com/repos/huggingface/datasets/issues/5226/events
https://github.com/huggingface/datasets/issues/5226
1,444,385,148
I_kwDODunzps5WF5F8
5,226
Q: Memory release when removing the column?
{'login': 'bayartsogt-ya', 'id': 43239645, 'node_id': 'MDQ6VXNlcjQzMjM5NjQ1', 'avatar_url': 'https://avatars.githubusercontent.com/u/43239645?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/bayartsogt-ya', 'html_url': 'https://github.com/bayartsogt-ya', 'followers_url': 'https://api.github.com/users/bayartsogt-ya/followers', 'following_url': 'https://api.github.com/users/bayartsogt-ya/following{/other_user}', 'gists_url': 'https://api.github.com/users/bayartsogt-ya/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/bayartsogt-ya/subscriptions', 'organizations_url': 'https://api.github.com/users/bayartsogt-ya/orgs', 'repos_url': 'https://api.github.com/users/bayartsogt-ya/repos', 'events_url': 'https://api.github.com/users/bayartsogt-ya/events{/privacy}', 'received_events_url': 'https://api.github.com/users/bayartsogt-ya/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
3
2022-11-10T18:35:27Z
2022-11-29T15:10:10Z
2022-11-29T15:10:10Z
NONE
No information
No information
No information
### Describe the bug How do I release memory when I use methods like `.remove_columns()` or `clear()` in notebooks? ```python from datasets import load_dataset common_voice = load_dataset("mozilla-foundation/common_voice_11_0", "ja", use_auth_token=True) # check memory -> RAM Used (GB): 0.704 / Total (GB) 33.670 common_voice = common_voice.remove_columns(column_names=common_voice.column_names['train']) common_voice.clear() # check memory -> RAM Used (GB): 0.705 / Total (GB) 33.670 ``` I tried `gc.collect()` but did not help ### Steps to reproduce the bug 1. load dataset 2. remove all the columns 3. check memory is reduced or not [link to reproduce](https://www.kaggle.com/code/bayartsogtya/huggingface-dataset-memory-issue/notebook?scriptVersionId=110630567) ### Expected behavior Memory released when I remove the column ### Environment info - `datasets` version: 2.1.0 - Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - PyArrow version: 8.0.0 - Pandas version: 1.3.5
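For background, `datasets` memory-maps Arrow files from disk, so resident RAM barely tracks dataset size, and `remove_columns` returns a new dataset still backed by the same cache files. A hedged sketch for reclaiming disk space (rather than RAM), using the library's `cleanup_cache_files` API:

```python
import gc

from datasets import load_dataset

common_voice = load_dataset("mozilla-foundation/common_voice_11_0", "ja", use_auth_token=True)

# Delete cache files other than the ones the current dataset objects still use.
removed = {split: ds.cleanup_cache_files() for split, ds in common_voice.items()}
print(f"cache files removed per split: {removed}")

del common_voice
gc.collect()  # drops the (small) Python-side references
```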
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5226/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5226/timeline
No information
completed
194
https://api.github.com/repos/huggingface/datasets/issues/5225
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5225/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5225/comments
https://api.github.com/repos/huggingface/datasets/issues/5225/events
https://github.com/huggingface/datasets/issues/5225
1,444,305,183
I_kwDODunzps5WFlkf
5,225
Add video feature
{'login': 'nateraw', 'id': 32437151, 'node_id': 'MDQ6VXNlcjMyNDM3MTUx', 'avatar_url': 'https://avatars.githubusercontent.com/u/32437151?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/nateraw', 'html_url': 'https://github.com/nateraw', 'followers_url': 'https://api.github.com/users/nateraw/followers', 'following_url': 'https://api.github.com/users/nateraw/following{/other_user}', 'gists_url': 'https://api.github.com/users/nateraw/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/nateraw/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/nateraw/subscriptions', 'organizations_url': 'https://api.github.com/users/nateraw/orgs', 'repos_url': 'https://api.github.com/users/nateraw/repos', 'events_url': 'https://api.github.com/users/nateraw/events{/privacy}', 'received_events_url': 'https://api.github.com/users/nateraw/received_events', 'type': 'User', 'site_admin': False}
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}, {'id': 1935892884, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODg0', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/help%20wanted', 'name': 'help wanted', 'color': '008672', 'default': True, 'description': 'Extra attention is needed'}, {'id': 3608941089, 'node_id': 'LA_kwDODunzps7XHBIh', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/vision', 'name': 'vision', 'color': 'bfdadc', 'default': False, 'description': 'Vision datasets'}]
open
false
No information
[]
No information
4
2022-11-10T17:36:11Z
2022-11-30T03:39:29Z
No information
CONTRIBUTOR
No information
No information
No information
### Feature request Add a `Video` feature to the library so folks can include videos in their datasets. ### Motivation Being able to load video data would be quite helpful. However, there are some challenges when it comes to videos: 1. Videos, unlike images, can end up being extremely large files 2. Often, when training video models, you need to do some very specific sampling. Videos might end up needing to be broken down into X number of clips used for training/inference 3. Videos have an additional audio stream, which must be accounted for 4. The feature needs to be able to encode/decode videos (with the right video settings) from bytes. ### Your contribution I did work on this a while back in [this (now closed) PR](https://github.com/huggingface/datasets/pull/4532). It used a library I made called [encoded_video](https://github.com/nateraw/encoded-video), which is basically the utils from [pytorchvideo](https://github.com/facebookresearch/pytorchvideo), but without the `torch` dep. It included the ability to read/write from bytes, as we need to do here. We don't want to be using a sketchy library that I made as a dependency in this repo, though. Would love to use this issue as a place to: - brainstorm ideas on how to do this right - list ways/examples to work around it for now CC @sayakpaul @mariosasko @fcakyon
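As an interim pattern while no `Video` feature exists, one sketch is to store file paths in the dataset and decode lazily on access via `set_transform`. Here `decode_video` is a hypothetical stand-in for whatever decoder is used (e.g. pytorchvideo or encoded-video):

```python
from datasets import Dataset


def decode_video(path):
    # Hypothetical placeholder: return decoded frames/audio for `path`.
    return {"path": path, "frames": None}


def lazy_decode(batch):
    # set_transform runs this on access, so videos are decoded lazily
    # instead of being materialized in the Arrow table.
    batch["video"] = [decode_video(p) for p in batch["video_path"]]
    return batch


ds = Dataset.from_dict({"video_path": ["clip1.mp4", "clip2.mp4"]})
ds.set_transform(lazy_decode)
print(ds[0]["video"])  # decoded on access
```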
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5225/reactions', 'total_count': 3, '+1': 3, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5225/timeline
No information
No information
195
https://api.github.com/repos/huggingface/datasets/issues/5224
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5224/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5224/comments
https://api.github.com/repos/huggingface/datasets/issues/5224/events
https://github.com/huggingface/datasets/issues/5224
1,443,640,867
I_kwDODunzps5WDDYj
5,224
Seems to freeze when loading audio dataset with wav files from local folder
{'login': 'uriii3', 'id': 45894267, 'node_id': 'MDQ6VXNlcjQ1ODk0MjY3', 'avatar_url': 'https://avatars.githubusercontent.com/u/45894267?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/uriii3', 'html_url': 'https://github.com/uriii3', 'followers_url': 'https://api.github.com/users/uriii3/followers', 'following_url': 'https://api.github.com/users/uriii3/following{/other_user}', 'gists_url': 'https://api.github.com/users/uriii3/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/uriii3/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/uriii3/subscriptions', 'organizations_url': 'https://api.github.com/users/uriii3/orgs', 'repos_url': 'https://api.github.com/users/uriii3/repos', 'events_url': 'https://api.github.com/users/uriii3/events{/privacy}', 'received_events_url': 'https://api.github.com/users/uriii3/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
2
2022-11-10T10:29:31Z
2022-11-22T11:24:19Z
2022-11-22T11:24:19Z
NONE
No information
No information
No information
### Describe the bug I'm following the instructions in [the AudioFolder-with-metadata docs](https://huggingface.co/docs/datasets/audio_load#audiofolder-with-metadata) to load a dataset from a local folder. Everything is in one folder: a train folder containing the audio files and the csv. When I try to load the dataset and run from the terminal, it seems to work but then freezes for no apparent reason. The metadata.csv file contains a few columns, but the important ones, `file_name` with the filename and `transcription` with the transcription, are okay. The audios are `.wav` files; I don't know if that might be the problem (I will proceed to try to change them all to `.mp3` and try again). ### Steps to reproduce the bug The code I'm using: ```python from datasets import load_dataset dataset = load_dataset("audiofolder", data_dir="../archive/Dataset") dataset[0]["audio"] ``` The output I obtain: ``` Resolving data files: 100%|██████████| 439/439 [00:00<00:00, 311135.43it/s] Using custom data configuration default-38d4546ffd010f3e Downloading and preparing dataset audiofolder/default to /Users/mine/.cache/huggingface/datasets/audiofolder/default-38d4546ffd010f3e/0.0.0/6cbdd16f8688354c63b4e2a36e1585d05de285023ee6443ffd71c4182055c0fc... (the "Resolving data files: 100%" and "Using custom data configuration default-38d4546ffd010f3e" lines then repeat over a dozen more times) ``` And then here it just freezes and nothing more happens. ### Expected behavior Load the dataset. ### Environment info Datasets version: datasets 2.6.1 pypi_0 pypi
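A hedged debugging sketch for narrowing down where it hangs: `load_dataset` returns a `DatasetDict`, so the split has to be indexed first, and streaming skips the full preparation step, which can help isolate the hang (assuming the `transcription` column from the metadata.csv above):

```python
from datasets import load_dataset

# Streaming avoids preparing the whole dataset up front; if this works,
# the hang is likely in the preparation/decoding step.
streamed = load_dataset("audiofolder", data_dir="../archive/Dataset", streaming=True)
print(next(iter(streamed["train"]))["transcription"])

# Note: the non-streaming result is a DatasetDict, so index the split first.
dataset = load_dataset("audiofolder", data_dir="../archive/Dataset")
print(dataset["train"][0]["audio"])
```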
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5224/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5224/timeline
No information
completed
196
https://api.github.com/repos/huggingface/datasets/issues/5223
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5223/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5223/comments
https://api.github.com/repos/huggingface/datasets/issues/5223/events
https://github.com/huggingface/datasets/pull/5223
1,442,610,658
PR_kwDODunzps5CjT9Z
5,223
Add SQL guide
{'login': 'stevhliu', 'id': 59462357, 'node_id': 'MDQ6VXNlcjU5NDYyMzU3', 'avatar_url': 'https://avatars.githubusercontent.com/u/59462357?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/stevhliu', 'html_url': 'https://github.com/stevhliu', 'followers_url': 'https://api.github.com/users/stevhliu/followers', 'following_url': 'https://api.github.com/users/stevhliu/following{/other_user}', 'gists_url': 'https://api.github.com/users/stevhliu/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/stevhliu/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/stevhliu/subscriptions', 'organizations_url': 'https://api.github.com/users/stevhliu/orgs', 'repos_url': 'https://api.github.com/users/stevhliu/repos', 'events_url': 'https://api.github.com/users/stevhliu/events{/privacy}', 'received_events_url': 'https://api.github.com/users/stevhliu/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
4
2022-11-09T19:10:27Z
2022-11-15T17:40:25Z
2022-11-15T17:40:21Z
MEMBER
No information
False
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5223', 'html_url': 'https://github.com/huggingface/datasets/pull/5223', 'diff_url': 'https://github.com/huggingface/datasets/pull/5223.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5223.patch', 'merged_at': '2022-11-15T17:40:21Z'}
This PR adapts @nateraw's awesome SQL notebook as a guide for the docs!
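For reference, a minimal sketch of the interop the guide covers, assuming the `Dataset.to_sql`/`Dataset.from_sql` API (URI connection strings require `sqlalchemy` to be installed):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})

# Write the dataset to a SQLite table, then read it back.
ds.to_sql("data", "sqlite:///example.db")
loaded = Dataset.from_sql("data", "sqlite:///example.db")
print(loaded[0])  # {'text': 'hello', 'label': 0}
```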
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5223/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5223/timeline
No information
No information
197
https://api.github.com/repos/huggingface/datasets/issues/5222
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5222/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5222/comments
https://api.github.com/repos/huggingface/datasets/issues/5222/events
https://github.com/huggingface/datasets/issues/5222
1,442,412,507
I_kwDODunzps5V-Xfb
5,222
HuggingFace website is incorrectly reporting that my datasets are pickled
{'login': 'ProGamerGov', 'id': 10626398, 'node_id': 'MDQ6VXNlcjEwNjI2Mzk4', 'avatar_url': 'https://avatars.githubusercontent.com/u/10626398?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/ProGamerGov', 'html_url': 'https://github.com/ProGamerGov', 'followers_url': 'https://api.github.com/users/ProGamerGov/followers', 'following_url': 'https://api.github.com/users/ProGamerGov/following{/other_user}', 'gists_url': 'https://api.github.com/users/ProGamerGov/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/ProGamerGov/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/ProGamerGov/subscriptions', 'organizations_url': 'https://api.github.com/users/ProGamerGov/orgs', 'repos_url': 'https://api.github.com/users/ProGamerGov/repos', 'events_url': 'https://api.github.com/users/ProGamerGov/events{/privacy}', 'received_events_url': 'https://api.github.com/users/ProGamerGov/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
4
2022-11-09T16:41:16Z
2022-11-09T18:10:46Z
2022-11-09T18:06:57Z
NONE
No information
No information
No information
### Describe the bug HuggingFace is incorrectly reporting that my datasets are pickled. They are not pickled; they are simply ZIP files containing PNG images. Hopefully this is the right location to report this bug. ### Steps to reproduce the bug Inspect my dataset repository here: https://huggingface.co/datasets/ProGamerGov/StableDiffusion-v1-5-Regularization-Images ### Expected behavior They should not be reported as being pickled. ### Environment info N/A
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5222/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5222/timeline
No information
completed
198
https://api.github.com/repos/huggingface/datasets/issues/5221
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5221/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5221/comments
https://api.github.com/repos/huggingface/datasets/issues/5221/events
https://github.com/huggingface/datasets/issues/5221
1,442,309,094
I_kwDODunzps5V9-Pm
5,221
Cannot push
{'login': 'bayartsogt-ya', 'id': 43239645, 'node_id': 'MDQ6VXNlcjQzMjM5NjQ1', 'avatar_url': 'https://avatars.githubusercontent.com/u/43239645?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/bayartsogt-ya', 'html_url': 'https://github.com/bayartsogt-ya', 'followers_url': 'https://api.github.com/users/bayartsogt-ya/followers', 'following_url': 'https://api.github.com/users/bayartsogt-ya/following{/other_user}', 'gists_url': 'https://api.github.com/users/bayartsogt-ya/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/bayartsogt-ya/subscriptions', 'organizations_url': 'https://api.github.com/users/bayartsogt-ya/orgs', 'repos_url': 'https://api.github.com/users/bayartsogt-ya/repos', 'events_url': 'https://api.github.com/users/bayartsogt-ya/events{/privacy}', 'received_events_url': 'https://api.github.com/users/bayartsogt-ya/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
2
2022-11-09T15:32:05Z
2022-11-10T18:11:21Z
2022-11-10T18:11:11Z
NONE
No information
No information
No information
### Describe the bug I am facing an issue when I try to push a tar.gz file of around 11G to the Hub. ``` (venv) ╭─laptop@laptop ~/PersonalProjects/data/ulaanbal_v0 ‹main●› ╰─$ du -sh * 4.0K README.md 13G data 516K test.jsonl 18M train.jsonl 4.0K ulaanbal_v0.py 11G ulaanbal_v0.tar.gz 452K validation.jsonl (venv) ╭─laptop@laptop~/PersonalProjects/data/ulaanbal_v0 ‹main●› ╰─$ git add ulaanbal_v0.tar.gz && git commit -m 'large version' (venv) ╭─laptop@laptop ~/PersonalProjects/data/ulaanbal_v0 ‹main●› ╰─$ git push EOFoading LFS objects: 0% (0/1), 0 B | 0 B/s Uploading LFS objects: 0% (0/1), 0 B | 0 B/s, done. error: failed to push some refs to 'https://huggingface.co/datasets/bayartsogt/ulaanbal_v0' ``` I have already tried pushing a small version of this and it was working fine, so my guess is that it's probably because of the big file. I ran the following before the commit: ``` ╰─$ git lfs install ╰─$ huggingface-cli lfs-enable-largefiles . ``` ### Steps to reproduce the bug Create a private dataset on Hugging Face and push a 12G tar.gz file ### Expected behavior To be pushed with no issue ### Environment info - `datasets` version: 2.6.1 - Platform: Darwin-21.6.0-x86_64-i386-64bit - Python version: 3.7.11 - PyArrow version: 10.0.0 - Pandas version: 1.3.5
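One possible alternative sketch, sidestepping git entirely by uploading the archive with `huggingface_hub` (repo id and file name taken from the report above):

```python
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="ulaanbal_v0.tar.gz",
    path_in_repo="ulaanbal_v0.tar.gz",
    repo_id="bayartsogt/ulaanbal_v0",
    repo_type="dataset",  # the repo is a dataset, not a model
)
```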
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5221/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5221/timeline
No information
completed
199
https://api.github.com/repos/huggingface/datasets/issues/5220
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5220/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5220/comments
https://api.github.com/repos/huggingface/datasets/issues/5220/events
https://github.com/huggingface/datasets/issues/5220
1,441,664,377
I_kwDODunzps5V7g15
5,220
Implicit type conversion of lists in to_pandas
{'login': 'sanderland', 'id': 48946947, 'node_id': 'MDQ6VXNlcjQ4OTQ2OTQ3', 'avatar_url': 'https://avatars.githubusercontent.com/u/48946947?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/sanderland', 'html_url': 'https://github.com/sanderland', 'followers_url': 'https://api.github.com/users/sanderland/followers', 'following_url': 'https://api.github.com/users/sanderland/following{/other_user}', 'gists_url': 'https://api.github.com/users/sanderland/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/sanderland/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/sanderland/subscriptions', 'organizations_url': 'https://api.github.com/users/sanderland/orgs', 'repos_url': 'https://api.github.com/users/sanderland/repos', 'events_url': 'https://api.github.com/users/sanderland/events{/privacy}', 'received_events_url': 'https://api.github.com/users/sanderland/received_events', 'type': 'User', 'site_admin': False}
[]
closed
false
No information
[]
No information
2
2022-11-09T08:40:18Z
2022-11-10T16:12:26Z
2022-11-10T16:12:26Z
CONTRIBUTOR
No information
No information
No information
### Describe the bug ``` from datasets import Dataset ds = Dataset.from_list([{'a': [1, 2, 3]}]) ds.to_pandas().a.values[0] ``` Results in `array([1, 2, 3])` -- a rather unexpected type conversion, which made downstream tools that expect lists unhappy. ### Steps to reproduce the bug See the snippet above ### Expected behavior Keep the original list type ### Environment info datasets 2.6.1 python 3.8.10
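Until or unless this conversion becomes configurable, a small post-hoc fix sketch is to map the NumPy arrays back to Python lists after `to_pandas`:

```python
from datasets import Dataset

ds = Dataset.from_list([{"a": [1, 2, 3]}])
df = ds.to_pandas()

# to_pandas yields numpy arrays for list columns; convert them back.
df["a"] = df["a"].map(list)
print(type(df.a.values[0]))  # <class 'list'>
```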
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5220/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5220/timeline
No information
completed