url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7347/comments | https://api.github.com/repos/huggingface/datasets/issues/7347/events | https://github.com/huggingface/datasets/issues/7347 | 2,760,282,339 | I_kwDODunzps6khpDj | 7,347 | Converting Arrow to WebDataset TAR Format for Offline Use | {
"login": "katie312",
"id": 91370128,
"node_id": "MDQ6VXNlcjkxMzcwMTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/91370128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/katie312",
"html_url": "https://github.com/katie312",
"followers_url": "https://api.github.com/users/katie312/followers",
"following_url": "https://api.github.com/users/katie312/following{/other_user}",
"gists_url": "https://api.github.com/users/katie312/gists{/gist_id}",
"starred_url": "https://api.github.com/users/katie312/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/katie312/subscriptions",
"organizations_url": "https://api.github.com/users/katie312/orgs",
"repos_url": "https://api.github.com/users/katie312/repos",
"events_url": "https://api.github.com/users/katie312/events{/privacy}",
"received_events_url": "https://api.github.com/users/katie312/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 4 | 2024-12-27T01:40:44 | 2024-12-31T17:38:00 | 2024-12-28T15:38:03 | NONE | null | ### Feature request
Hi,
I've downloaded an Arrow-formatted dataset for offline use with Hugging Face's `datasets` library:
```
import json
from datasets import load_dataset
dataset = load_dataset("pixparse/cc3m-wds")
dataset.save_to_disk("./cc3m_1")
```
Now I need to convert it to WebDataset's TAR format for offline data ingestion.
Is there a straightforward method to achieve this conversion without an internet connection? Can I simply convert it by
```
tar -cvf
```
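For context on what a `tar -cvf` would have to produce: a WebDataset shard is just a plain tar archive in which files sharing a basename key form one sample, with the extension naming the field. A stdlib-only sketch of that layout (the filenames and fields here are invented for illustration):

```python
import io
import json
import os
import tarfile
import tempfile

# Each sample is a group of tar members sharing a key, e.g. 000000.txt + 000000.json.
samples = [{"__key__": f"{i:06d}", "txt": f"caption {i}", "json": {"idx": i}} for i in range(3)]

shard_path = os.path.join(tempfile.mkdtemp(), "shard-000000.tar")
with tarfile.open(shard_path, "w") as tar:
    for sample in samples:
        key = sample.pop("__key__")
        for ext, value in sample.items():
            payload = (json.dumps(value) if ext == "json" else str(value)).encode("utf-8")
            info = tarfile.TarInfo(name=f"{key}.{ext}")
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))

with tarfile.open(shard_path) as tar:
    names = sorted(tar.getnames())
```

So a plain `tar -cvf` can work, but only once per-sample files exist with this naming convention; a dataset saved with `save_to_disk` stores rows in Arrow tables rather than per-sample files, so it has to be iterated and re-written (the `webdataset` library's `ShardWriter` is the usual tool for that).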
By the way, when I tried:
```
import webdataset as wds
from huggingface_hub import get_token
from torch.utils.data import DataLoader
hf_token = get_token()
url = "https://huggingface.co/datasets/timm/imagenet-12k-wds/resolve/main/imagenet12k-train-{{0000..1023}}.tar"
url = f"pipe:curl -s -L {url} -H 'Authorization:Bearer {hf_token}'"
dataset = wds.WebDataset(url).decode()
dataset.save_to_disk("./cc3m_webdataset")
```
this error occurred:
```
AttributeError: 'WebDataset' object has no attribute 'save_to_disk'
```
Thanks a lot!
### Motivation
Converting Arrow to WebDataset TAR Format
### Your contribution
No clue yet | {
"login": "katie312",
"id": 91370128,
"node_id": "MDQ6VXNlcjkxMzcwMTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/91370128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/katie312",
"html_url": "https://github.com/katie312",
"followers_url": "https://api.github.com/users/katie312/followers",
"following_url": "https://api.github.com/users/katie312/following{/other_user}",
"gists_url": "https://api.github.com/users/katie312/gists{/gist_id}",
"starred_url": "https://api.github.com/users/katie312/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/katie312/subscriptions",
"organizations_url": "https://api.github.com/users/katie312/orgs",
"repos_url": "https://api.github.com/users/katie312/repos",
"events_url": "https://api.github.com/users/katie312/events{/privacy}",
"received_events_url": "https://api.github.com/users/katie312/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7347/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7346 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7346/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7346/comments | https://api.github.com/repos/huggingface/datasets/issues/7346/events | https://github.com/huggingface/datasets/issues/7346 | 2,758,752,118 | I_kwDODunzps6kbzd2 | 7,346 | OSError: Invalid flatbuffers message. | {
"login": "antecede",
"id": 46232487,
"node_id": "MDQ6VXNlcjQ2MjMyNDg3",
"avatar_url": "https://avatars.githubusercontent.com/u/46232487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antecede",
"html_url": "https://github.com/antecede",
"followers_url": "https://api.github.com/users/antecede/followers",
"following_url": "https://api.github.com/users/antecede/following{/other_user}",
"gists_url": "https://api.github.com/users/antecede/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antecede/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antecede/subscriptions",
"organizations_url": "https://api.github.com/users/antecede/orgs",
"repos_url": "https://api.github.com/users/antecede/repos",
"events_url": "https://api.github.com/users/antecede/events{/privacy}",
"received_events_url": "https://api.github.com/users/antecede/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-25T11:38:52 | 2024-12-25T12:03:13 | null | NONE | null | ### Describe the bug
When loading many large 2D arrays (1000 × 1152 each, 2,000 of them in this case) with `load_dataset`, the error `OSError: Invalid flatbuffers message` is raised.
When only 300 arrays of this size (1000 × 1152) are stored per file, they can be loaded correctly.
When 2,000 2D arrays are stored in each file, about 100 files are generated, each about 5–6 GB. But when 300 2D arrays are stored in each file, **about 600 files are generated, which is too many files**.
### Steps to reproduce the bug
error:
```python
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
Cell In[2], line 4
1 from datasets import Dataset
2 from datasets import load_dataset
----> 4 real_dataset = load_dataset("arrow", data_files='tensorData/real_ResidueTensor/*', split="train")#.with_format("torch") # , split="train"
5 # sim_dataset = load_dataset("arrow", data_files='tensorData/sim_ResidueTensor/*', split="train").with_format("torch")
6 real_dataset
File [~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/load.py:2151](http://localhost:8899/lab/tree/RTC%3Anew_world/esm3/~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/load.py#line=2150), in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2148 return builder_instance.as_streaming_dataset(split=split)
2150 # Download and prepare data
-> 2151 builder_instance.download_and_prepare(
2152 download_config=download_config,
2153 download_mode=download_mode,
2154 verification_mode=verification_mode,
2155 num_proc=num_proc,
2156 storage_options=storage_options,
2157 )
2159 # Build dataset for splits
2160 keep_in_memory = (
2161 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2162 )
File [~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/builder.py:924](http://localhost:8899/lab/tree/RTC%3Anew_world/esm3/~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/builder.py#line=923), in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
922 if num_proc is not None:
923 prepare_split_kwargs["num_proc"] = num_proc
--> 924 self._download_and_prepare(
925 dl_manager=dl_manager,
926 verification_mode=verification_mode,
927 **prepare_split_kwargs,
928 **download_and_prepare_kwargs,
929 )
930 # Sync info
931 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File [~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/builder.py:978](http://localhost:8899/lab/tree/RTC%3Anew_world/esm3/~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/builder.py#line=977), in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
976 split_dict = SplitDict(dataset_name=self.dataset_name)
977 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 978 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
980 # Checksums verification
981 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File [~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/packaged_modules/arrow/arrow.py:47](http://localhost:8899/lab/tree/RTC%3Anew_world/esm3/~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/packaged_modules/arrow/arrow.py#line=46), in Arrow._split_generators(self, dl_manager)
45 with open(file, "rb") as f:
46 try:
---> 47 reader = pa.ipc.open_stream(f)
48 except pa.lib.ArrowInvalid:
49 reader = pa.ipc.open_file(f)
File [~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.py:190](http://localhost:8899/lab/tree/RTC%3Anew_world/esm3/~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.py#line=189), in open_stream(source, options, memory_pool)
171 def open_stream(source, *, options=None, memory_pool=None):
172 """
173 Create reader for Arrow streaming format.
174
(...)
188 A reader for the given source
189 """
--> 190 return RecordBatchStreamReader(source, options=options,
191 memory_pool=memory_pool)
File [~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.py:52](http://localhost:8899/lab/tree/RTC%3Anew_world/esm3/~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.py#line=51), in RecordBatchStreamReader.__init__(self, source, options, memory_pool)
50 def __init__(self, source, *, options=None, memory_pool=None):
51 options = _ensure_default_ipc_read_options(options)
---> 52 self._open(source, options=options, memory_pool=memory_pool)
File [~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.pxi:1006](http://localhost:8899/lab/tree/RTC%3Anew_world/esm3/~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.pxi#line=1005), in pyarrow.lib._RecordBatchStreamReader._open()
File [~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/error.pxi:155](http://localhost:8899/lab/tree/RTC%3Anew_world/esm3/~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/error.pxi#line=154), in pyarrow.lib.pyarrow_internal_check_status()
File [~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/error.pxi:92](http://localhost:8899/lab/tree/RTC%3Anew_world/esm3/~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/error.pxi#line=91), in pyarrow.lib.check_status()
OSError: Invalid flatbuffers message.
```
To reproduce (this is just an example; the real 2D matrices are outputs of the large ESM model, and the matrix size is approximate):
```python
import numpy as np
import pyarrow as pa
random_arrays_list = [np.random.rand(1000, 1152) for _ in range(2000)]
table = pa.Table.from_pydict({
'tensor': [tensor.tolist() for tensor in random_arrays_list]
})
import pyarrow.feather as feather
feather.write_feather(table, 'test.arrow')
from datasets import load_dataset
dataset = load_dataset("arrow", data_files='test.arrow', split="train")
```
### Expected behavior
`load_dataset` loads the dataset normally, just as `feather.read_feather` does:
```python
import pyarrow.feather as feather
feather.read_feather('tensorData/real_ResidueTensor/real_tensor_1.arrow')
```
Plus `load_dataset("parquet", data_files='test.arrow', split="train")` works fine
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.26.5
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7346/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7345/comments | https://api.github.com/repos/huggingface/datasets/issues/7345/events | https://github.com/huggingface/datasets/issues/7345 | 2,758,585,709 | I_kwDODunzps6kbK1t | 7,345 | Different behaviour of IterableDataset.map vs Dataset.map with remove_columns | {
"login": "vttrifonov",
"id": 12157034,
"node_id": "MDQ6VXNlcjEyMTU3MDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/12157034?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vttrifonov",
"html_url": "https://github.com/vttrifonov",
"followers_url": "https://api.github.com/users/vttrifonov/followers",
"following_url": "https://api.github.com/users/vttrifonov/following{/other_user}",
"gists_url": "https://api.github.com/users/vttrifonov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vttrifonov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vttrifonov/subscriptions",
"organizations_url": "https://api.github.com/users/vttrifonov/orgs",
"repos_url": "https://api.github.com/users/vttrifonov/repos",
"events_url": "https://api.github.com/users/vttrifonov/events{/privacy}",
"received_events_url": "https://api.github.com/users/vttrifonov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-25T07:36:48 | 2024-12-25T07:36:48 | null | NONE | null | ### Describe the bug
The following code
```python
import datasets as hf
ds1 = hf.Dataset.from_list([{'i': i} for i in [0,1]])
#ds1 = ds1.to_iterable_dataset()
ds2 = ds1.map(
lambda i: {'i': i+1},
input_columns = ['i'],
remove_columns = ['i']
)
list(ds2)
```
produces
```python
[{'i': 1}, {'i': 2}]
```
as expected. If the line that converts `ds1` to an iterable dataset is uncommented, so that `ds2` is a map of an `IterableDataset`, the result is
```python
[{},{}]
```
I expected the output to be the same as before. It seems that in the second case the removed column is not added back into the output.
The issue seems to be [here](https://github.com/huggingface/datasets/blob/6c6a82a573f946c4a81069f56446caed15cee9c2/src/datasets/iterable_dataset.py#L1093): the columns are removed after the mapping, which is not what we want (or what the [documentation says](https://github.com/huggingface/datasets/blob/6c6a82a573f946c4a81069f56446caed15cee9c2/src/datasets/iterable_dataset.py#L2370)), because we want the columns removed from the transformed example but then added back if the map produced them.
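The expected ordering can be sketched in plain Python: drop the removed columns from the *input* example, then merge in whatever the mapped function returned (the helper below is hypothetical, purely to illustrate the ordering):

```python
def map_with_remove(examples, fn, input_columns, remove_columns):
    """Apply fn to each example, dropping remove_columns from the original
    example but keeping any column that fn itself (re)produces."""
    for ex in examples:
        out = fn(*(ex[c] for c in input_columns))
        kept = {k: v for k, v in ex.items() if k not in remove_columns}
        kept.update(out)  # columns produced by fn survive the removal
        yield kept

rows = [{"i": 0}, {"i": 1}]
result = list(map_with_remove(rows, lambda i: {"i": i + 1}, ["i"], ["i"]))
print(result)  # [{'i': 1}, {'i': 2}]
```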
This is `datasets==3.2.0` and `python==3.10`
### Steps to reproduce the bug
see above
### Expected behavior
see above
### Environment info
see above | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7345/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7344/comments | https://api.github.com/repos/huggingface/datasets/issues/7344/events | https://github.com/huggingface/datasets/issues/7344 | 2,754,735,951 | I_kwDODunzps6kMe9P | 7,344 | HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access SlimPajama-627B or c4 on TPUs | {
"login": "clankur",
"id": 9397233,
"node_id": "MDQ6VXNlcjkzOTcyMzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9397233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clankur",
"html_url": "https://github.com/clankur",
"followers_url": "https://api.github.com/users/clankur/followers",
"following_url": "https://api.github.com/users/clankur/following{/other_user}",
"gists_url": "https://api.github.com/users/clankur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clankur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clankur/subscriptions",
"organizations_url": "https://api.github.com/users/clankur/orgs",
"repos_url": "https://api.github.com/users/clankur/repos",
"events_url": "https://api.github.com/users/clankur/events{/privacy}",
"received_events_url": "https://api.github.com/users/clankur/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-22T16:30:07 | 2024-12-22T16:30:07 | null | NONE | null | ### Describe the bug
I am trying to run some trainings on Google's TPUs using Hugging Face's DataLoader on [SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B) and [c4](https://huggingface.co/datasets/allenai/c4), but I keep running into a `429 Client Error: Too Many Requests for URL` error when I call `load_dataset`. The odd part is that I am able to successfully run trainings with the [wikitext dataset](https://huggingface.co/datasets/Salesforce/wikitext). Is there something I need to set up specifically to train with SlimPajama or C4 on TPUs? I am not clear on why I am getting these errors.
### Steps to reproduce the bug
These are the commands you could run to reproduce the error below, but you will need a ClearML account (you can create one [here](https://app.clear.ml/login?redirect=%2Fdashboard)) with a queue set up to run on Google TPUs:
```bash
git clone https://github.com/clankur/muGPT.git
cd muGPT
python -m train --config-name=slim_v4-32_84m.yaml +training.queue={NAME_OF_CLEARML_QUEUE}
```
The error I see:
```
Traceback (most recent call last):
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/clearml/binding/hydra_bind.py", line 230, in _patched_task_function
return task_function(a_config, *a_args, **a_kwargs)
File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/train.py", line 1037, in main
main_contained(config, logger)
File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/train.py", line 840, in main_contained
loader = get_loader("train", config.training_data, config.training.tokens)
File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/input_loader.py", line 549, in get_loader
return HuggingFaceDataLoader(split, config, token_batch_params)
File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/input_loader.py", line 395, in __init__
self.dataset = load_dataset(
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 2112, in load_dataset
builder_instance = load_dataset_builder(
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1798, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1495, in dataset_module_factory
raise e1 from None
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1479, in dataset_module_factory
).get_module()
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1034, in get_module
else get_data_patterns(base_path, download_config=self.download_config)
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/data_files.py", line 457, in get_data_patterns
return _get_data_files_patterns(resolver)
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/data_files.py", line 248, in _get_data_files_patterns
data_files = pattern_resolver(pattern)
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/data_files.py", line 340, in resolve_pattern
for filepath, info in fs.glob(pattern, detail=True).items()
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 409, in glob
return super().glob(path, **kwargs)
File "/home/clankur/.clearml/venvs-builds/3.10/lib/python3.10/site-packages/fsspec/spec.py", line 602, in glob
allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs)
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 429, in find
out = self._ls_tree(path, recursive=True, refresh=refresh, revision=resolved_path.revision, **kwargs)
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 358, in _ls_tree
self._ls_tree(
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 375, in _ls_tree
for path_info in tree:
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 3080, in list_repo_tree
for path_info in paginate(path=tree_url, headers=headers, params={"recursive": recursive, "expand": expand}):
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/utils/_pagination.py", line 46, in paginate
hf_raise_for_status(r)
File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 477, in hf_raise_for_status
raise _format(HfHubHTTPError, str(e), response) from e
huggingface_hub.errors.HfHubHTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/api/datasets/cerebras/SlimPajama-627B/tree/2d0accdd58c5d5511943ca1f5ff0e3eb5e293543?recursive=True&expand=True&cursor=ZXlKbWFXeGxYMjVoYldVaU9pSjBaWE4wTDJOb2RXNXJNUzlsZUdGdGNHeGxYMmh2YkdSdmRYUmZPVFEzTG1wemIyNXNMbnB6ZENKOTo2MjUw (Request ID: Root=1-67673de9-1413900606ede7712b08ef2c;1304c09c-3e69-4222-be14-f10ee709d49c)
maximum queue size reached
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
```
### Expected behavior
I'd expect the DataLoader to load from the SlimPajama-627B and c4 dataset without issue.
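Until the rate limiting itself is resolved, one common stopgap is an exponential-backoff retry around the loading call — a stdlib-only sketch (`flaky_load` is a toy stand-in for the real `load_dataset` call, and catching bare `Exception` is simplified):

```python
import time

def with_backoff(load, retries=5, base_delay=1.0):
    """Call `load`, retrying with exponential backoff on failure."""
    for attempt in range(retries):
        try:
            return load()
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Toy stand-in that fails twice before succeeding.
calls = {"n": 0}
def flaky_load():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "dataset"

result = with_backoff(flaky_load, base_delay=0.01)
```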
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-5.8.0-1035-gcp-x86_64-with-glibc2.31
- Python version: 3.10.16
- Huggingface_hub version: 0.26.5
- PyArrow version: 18.1.0
- Pandas version: 2.2.3 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7344/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7343/comments | https://api.github.com/repos/huggingface/datasets/issues/7343/events | https://github.com/huggingface/datasets/issues/7343 | 2,750,525,823 | I_kwDODunzps6j8bF_ | 7,343 | [Bug] Inconsistent behavior of data_files and data_dir in load_dataset method. | {
"login": "JasonCZH4",
"id": 74161960,
"node_id": "MDQ6VXNlcjc0MTYxOTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/74161960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JasonCZH4",
"html_url": "https://github.com/JasonCZH4",
"followers_url": "https://api.github.com/users/JasonCZH4/followers",
"following_url": "https://api.github.com/users/JasonCZH4/following{/other_user}",
"gists_url": "https://api.github.com/users/JasonCZH4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JasonCZH4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JasonCZH4/subscriptions",
"organizations_url": "https://api.github.com/users/JasonCZH4/orgs",
"repos_url": "https://api.github.com/users/JasonCZH4/repos",
"events_url": "https://api.github.com/users/JasonCZH4/events{/privacy}",
"received_events_url": "https://api.github.com/users/JasonCZH4/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-19T14:31:27 | 2024-12-19T14:31:27 | null | NONE | null | ### Describe the bug
Inconsistent behavior of the `data_files` and `data_dir` arguments in the `load_dataset` method.
### Steps to reproduce the bug
# First
I have three files, named 'train.json', 'val.json', 'test.json'.
Each one has a simple dict `{text:'aaa'}`.
Their paths are `/data/train.json`, `/data/val.json`, and `/data/test.json`.
I load dataset with `data_files` argument:
```py
files = [os.path.join('./data',file) for file in os.listdir('./data')]
ds = load_dataset(
path='json',
data_files=files,)
```
And I get:
```py
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 3
})
})
```
However, if I load the dataset with the `data_dir` argument:
```py
ds = load_dataset(
path='json',
data_dir='./data',)
```
And I get:
```py
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 1
})
validation: Dataset({
features: ['text'],
num_rows: 1
})
test: Dataset({
features: ['text'],
num_rows: 1
})
})
```
The two results are not the same, even though the statement [here](https://github.com/huggingface/datasets/blob/d0c152a979d91cc34b605c0298aebc650ab7dd27/src/datasets/load.py#L1790) says the two behaviors are equivalent.
# Second
If some filenames include 'test' while others do not, `load_dataset` returns only the `test` dataset and the other files are **abandoned**.
Given two files named `test.json` and `1.json`, each containing a simple dict `{text:'aaa'}`.
I load the dataset using:
```py
ds = load_dataset(
path='json',
data_dir='./data',)
```
Only `test` is returned; `1.json` is missing:
```py
DatasetDict({
test: Dataset({
features: ['text'],
num_rows: 1
})
})
```
Things do not change even if I manually set `split='train'`.
### Expected behavior
1. Fix the above bugs.
2. Although the document says that the `load_dataset` method will `Find which file goes into which split (e.g. train/test) based on file and directory names or on the YAML configuration`, I hope I can manually decide whether to do so. Sometimes users may accidentally put a `test` string in a filename when they just want a single `train` dataset. If the number of files in `data_dir` is huge, it's not easy to find out what causes the second situation mentioned above.
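To make the keyword hazard concrete, here is a toy sketch of filename-keyword split inference of the kind described above (not the library's actual implementation; the keywords and their precedence are invented):

```python
# Keyword -> split, checked in order; "valid" must precede "val".
SPLIT_KEYWORDS = [("train", "train"), ("valid", "validation"),
                  ("val", "validation"), ("test", "test")]

def infer_split(filename):
    """Return the split a filename maps to, or None if no keyword matches."""
    stem = filename.rsplit(".", 1)[0].lower()
    for keyword, split in SPLIT_KEYWORDS:
        if keyword in stem:
            return split
    return None

files = ["train.json", "val.json", "test.json", "1.json"]
splits = {name: infer_split(name) for name in files}
# A file with no keyword ("1.json") maps to no split and would be dropped.
```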
### Environment info
datasets==3.2.0
Ubuntu18.84 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7343/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7342 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7342/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7342/comments | https://api.github.com/repos/huggingface/datasets/issues/7342/events | https://github.com/huggingface/datasets/pull/7342 | 2,749,572,310 | PR_kwDODunzps6FvgcK | 7,342 | Update LICENSE | {
"login": "eliebak",
"id": 97572401,
"node_id": "U_kgDOBdDWMQ",
"avatar_url": "https://avatars.githubusercontent.com/u/97572401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliebak",
"html_url": "https://github.com/eliebak",
"followers_url": "https://api.github.com/users/eliebak/followers",
"following_url": "https://api.github.com/users/eliebak/following{/other_user}",
"gists_url": "https://api.github.com/users/eliebak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eliebak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eliebak/subscriptions",
"organizations_url": "https://api.github.com/users/eliebak/orgs",
"repos_url": "https://api.github.com/users/eliebak/repos",
"events_url": "https://api.github.com/users/eliebak/events{/privacy}",
"received_events_url": "https://api.github.com/users/eliebak/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-19T08:17:50 | 2024-12-19T08:44:08 | 2024-12-19T08:44:08 | NONE | null | null | {
"login": "eliebak",
"id": 97572401,
"node_id": "U_kgDOBdDWMQ",
"avatar_url": "https://avatars.githubusercontent.com/u/97572401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliebak",
"html_url": "https://github.com/eliebak",
"followers_url": "https://api.github.com/users/eliebak/followers",
"following_url": "https://api.github.com/users/eliebak/following{/other_user}",
"gists_url": "https://api.github.com/users/eliebak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eliebak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eliebak/subscriptions",
"organizations_url": "https://api.github.com/users/eliebak/orgs",
"repos_url": "https://api.github.com/users/eliebak/repos",
"events_url": "https://api.github.com/users/eliebak/events{/privacy}",
"received_events_url": "https://api.github.com/users/eliebak/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7342/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7342",
"html_url": "https://github.com/huggingface/datasets/pull/7342",
"diff_url": "https://github.com/huggingface/datasets/pull/7342.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7342.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7341 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7341/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7341/comments | https://api.github.com/repos/huggingface/datasets/issues/7341/events | https://github.com/huggingface/datasets/pull/7341 | 2,745,658,561 | PR_kwDODunzps6FiGlt | 7,341 | minor video docs on how to install | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-17T18:06:17 | 2024-12-17T18:11:17 | 2024-12-17T18:11:15 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7341/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7341",
"html_url": "https://github.com/huggingface/datasets/pull/7341",
"diff_url": "https://github.com/huggingface/datasets/pull/7341.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7341.patch",
"merged_at": "2024-12-17T18:11:14"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7340/comments | https://api.github.com/repos/huggingface/datasets/issues/7340/events | https://github.com/huggingface/datasets/pull/7340 | 2,745,473,274 | PR_kwDODunzps6FhdR2 | 7,340 | don't import soundfile in tests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-17T16:49:55 | 2024-12-17T16:54:04 | 2024-12-17T16:50:24 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7340/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7340",
"html_url": "https://github.com/huggingface/datasets/pull/7340",
"diff_url": "https://github.com/huggingface/datasets/pull/7340.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7340.patch",
"merged_at": "2024-12-17T16:50:24"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7339 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7339/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7339/comments | https://api.github.com/repos/huggingface/datasets/issues/7339/events | https://github.com/huggingface/datasets/pull/7339 | 2,745,460,060 | PR_kwDODunzps6FhaTl | 7,339 | Update CONTRIBUTING.md | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-17T16:45:25 | 2024-12-17T16:51:36 | 2024-12-17T16:46:30 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7339/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7339",
"html_url": "https://github.com/huggingface/datasets/pull/7339",
"diff_url": "https://github.com/huggingface/datasets/pull/7339.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7339.patch",
"merged_at": "2024-12-17T16:46:30"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7337/comments | https://api.github.com/repos/huggingface/datasets/issues/7337/events | https://github.com/huggingface/datasets/issues/7337 | 2,744,877,569 | I_kwDODunzps6jm4IB | 7,337 | One or several metadata.jsonl were found, but not in the same directory or in a parent directory of | {
"login": "mst272",
"id": 67250532,
"node_id": "MDQ6VXNlcjY3MjUwNTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/67250532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mst272",
"html_url": "https://github.com/mst272",
"followers_url": "https://api.github.com/users/mst272/followers",
"following_url": "https://api.github.com/users/mst272/following{/other_user}",
"gists_url": "https://api.github.com/users/mst272/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mst272/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mst272/subscriptions",
"organizations_url": "https://api.github.com/users/mst272/orgs",
"repos_url": "https://api.github.com/users/mst272/repos",
"events_url": "https://api.github.com/users/mst272/events{/privacy}",
"received_events_url": "https://api.github.com/users/mst272/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-17T12:58:43 | 2024-12-17T12:58:43 | null | NONE | null | ### Describe the bug
ImageFolder with metadata.jsonl error. I downloaded liuhaotian/LLaVA-CC3M-Pretrain-595K locally from Hugging Face. Following the tutorial at https://huggingface.co/docs/datasets/image_dataset#image-captioning, I only put images.zip and a metadata.jsonl containing the information in the same folder. However, after loading, this error was reported: One or several metadata.jsonl were found, but not in the same directory or in a parent directory of.
The data in my jsonl file is as follows:
> {"id": "GCC_train_002448550", "file_name": "GCC_train_002448550.jpg", "conversations": [{"from": "human", "value": "<image>\nProvide a brief description of the given image."}, {"from": "gpt", "value": "a view of a city , where the flyover was proposed to reduce the increasing traffic on thursday ."}]}
### Steps to reproduce the bug
from datasets import load_dataset
image = load_dataset("imagefolder", data_dir='data/opensource_data')
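In case it helps debugging: `imagefolder` needs the `metadata.jsonl` to sit in the same directory as (or a parent directory of) the image files it resolves, and each `file_name` is looked up relative to the metadata file. One common cause of this exact error is images that are still packed in `images.zip`, which gets extracted to a different location than the loose `metadata.jsonl`. A minimal stdlib sketch (the helper name is made up, not a `datasets` API) to check a local layout before calling `load_dataset`:

```python
import json
import tempfile
from pathlib import Path

def find_missing_images(data_dir):
    """List file_name entries in data_dir/metadata.jsonl that have no file on disk."""
    data_dir = Path(data_dir)
    missing = []
    with (data_dir / "metadata.jsonl").open(encoding="utf-8") as f:
        for line in f:
            if line.strip():
                record = json.loads(line)
                if not (data_dir / record["file_name"]).is_file():
                    missing.append(record["file_name"])
    return missing

# Tiny self-contained demo: one image on disk, one that is "still inside the zip".
demo = Path(tempfile.mkdtemp())
(demo / "present.jpg").write_bytes(b"\xff\xd8")
(demo / "metadata.jsonl").write_text(
    json.dumps({"file_name": "present.jpg"}) + "\n"
    + json.dumps({"file_name": "missing.jpg"}) + "\n",
    encoding="utf-8",
)
print(find_missing_images(demo))  # ['missing.jpg']
```

If this returns names for your folder, extracting images.zip next to metadata.jsonl usually resolves the error.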
### Expected behavior
success
### Environment info
datasets==3.2.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7337/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7336 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7336/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7336/comments | https://api.github.com/repos/huggingface/datasets/issues/7336/events | https://github.com/huggingface/datasets/issues/7336 | 2,744,746,456 | I_kwDODunzps6jmYHY | 7,336 | Clarify documentation or Create DatasetCard | {
"login": "August-murr",
"id": 145011209,
"node_id": "U_kgDOCKSyCQ",
"avatar_url": "https://avatars.githubusercontent.com/u/145011209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/August-murr",
"html_url": "https://github.com/August-murr",
"followers_url": "https://api.github.com/users/August-murr/followers",
"following_url": "https://api.github.com/users/August-murr/following{/other_user}",
"gists_url": "https://api.github.com/users/August-murr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/August-murr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/August-murr/subscriptions",
"organizations_url": "https://api.github.com/users/August-murr/orgs",
"repos_url": "https://api.github.com/users/August-murr/repos",
"events_url": "https://api.github.com/users/August-murr/events{/privacy}",
"received_events_url": "https://api.github.com/users/August-murr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-12-17T12:01:00 | 2024-12-17T12:01:00 | null | NONE | null | ### Feature request
I noticed that you can use a Model Card instead of a Dataset Card when pushing a dataset to the Hub, but this isn't clearly mentioned in [the docs](https://huggingface.co/docs/datasets/dataset_card).
- Update the docs to clarify that a Model Card can work for datasets too.
- It might be worth creating a dedicated DatasetCard module, similar to the ModelCard module, for consistency and better support.
Not sure if this belongs here or on the [Hub repo](https://github.com/huggingface/huggingface_hub), but thought I’d bring it up!
### Motivation
I just spent about an hour on [this issue](https://github.com/huggingface/trl/pull/2491) trying to create a `DatasetCard` for a script.
### Your contribution
might later | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7336/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7335 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7335/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7335/comments | https://api.github.com/repos/huggingface/datasets/issues/7335/events | https://github.com/huggingface/datasets/issues/7335 | 2,743,437,260 | I_kwDODunzps6jhYfM | 7,335 | Too many open files: '/root/.cache/huggingface/token' | {
"login": "kopyl",
"id": 17604849,
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kopyl",
"html_url": "https://github.com/kopyl",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"repos_url": "https://api.github.com/users/kopyl/repos",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-16T21:30:24 | 2024-12-16T21:30:24 | null | NONE | null | ### Describe the bug
I ran this code:
```
from datasets import load_dataset
dataset = load_dataset("common-canvas/commoncatalog-cc-by", cache_dir="/datadrive/datasets/cc", num_proc=1000)
```
And got this error.
Before it was some other file though (like something...incomplete)
Running
```
ulimit -n 8192
```
did not help at all.
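One thing to double-check: `ulimit -n` only raises the limit of the shell it runs in, not of an already-running Jupyter kernel, so the kernel may still be stuck at the old soft limit. The limit can be raised from inside the process instead (a sketch, Unix-only); lowering `num_proc` from 1000 to something near the CPU count should also cut the number of simultaneously open files dramatically:

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
# Raise the soft limit as far as the hard limit allows, capped at 8192,
# and never lower the current value.
target = 8192 if hard == resource.RLIM_INFINITY else min(8192, hard)
new_soft = max(soft, target)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
print("open-file soft limit:", resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```

This has to run in the same kernel/process that calls `load_dataset`, before the download starts.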
### Steps to reproduce the bug
Run the code i sent
### Expected behavior
Should be no errors
### Environment info
linux, jupyter lab. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7335/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7334 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7334/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7334/comments | https://api.github.com/repos/huggingface/datasets/issues/7334/events | https://github.com/huggingface/datasets/issues/7334 | 2,740,266,503 | I_kwDODunzps6jVSYH | 7,334 | TypeError: Value.__init__() missing 1 required positional argument: 'dtype' | {
"login": "kakamond",
"id": 185799756,
"node_id": "U_kgDOCxMUTA",
"avatar_url": "https://avatars.githubusercontent.com/u/185799756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kakamond",
"html_url": "https://github.com/kakamond",
"followers_url": "https://api.github.com/users/kakamond/followers",
"following_url": "https://api.github.com/users/kakamond/following{/other_user}",
"gists_url": "https://api.github.com/users/kakamond/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kakamond/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kakamond/subscriptions",
"organizations_url": "https://api.github.com/users/kakamond/orgs",
"repos_url": "https://api.github.com/users/kakamond/repos",
"events_url": "https://api.github.com/users/kakamond/events{/privacy}",
"received_events_url": "https://api.github.com/users/kakamond/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-15T04:08:46 | 2024-12-15T04:08:46 | null | NONE | null | ### Describe the bug
ds = load_dataset(
"./xxx.py",
name="default",
split="train",
)
The `datasets` library does not support loading a local script for debugging anymore...
### Steps to reproduce the bug
```
from datasets import load_dataset
ds = load_dataset(
"./repo.py",
name="default",
split="train",
)
for item in ds:
print(item)
```
It works fine for "username/repo", but it does not work for "./repo.py" when debugging locally...
Running the above code template reports `TypeError: Value.__init__() missing 1 required positional argument: 'dtype'`.
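For context on the message itself: it is plain Python signature checking, meaning a `Value` feature is being constructed somewhere without its `dtype` argument. A stand-in class (not the real `datasets.Value`, just the same required argument) reproduces it; in a local script the fix is usually to spell the dtype out, e.g. `Value("string")` or `Value(dtype="string")` in the `features` definition:

```python
class Value:
    """Stand-in with the same required argument as datasets.Value(dtype, ...)."""
    def __init__(self, dtype):
        self.dtype = dtype

try:
    Value()  # a features dict built like {"text": Value()} ends up here
except TypeError as err:
    print(err)  # ...missing 1 required positional argument: 'dtype'

ok = Value("string")  # explicit dtype, no error
```

So the first thing to grep for in `repo.py` would be a bare `Value()` (or a features definition that deserialises into one).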
### Expected behavior
fix this bug
### Environment info
python 3.10 datasets==2.21 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7334/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7328/comments | https://api.github.com/repos/huggingface/datasets/issues/7328/events | https://github.com/huggingface/datasets/pull/7328 | 2,738,626,593 | PR_kwDODunzps6FKK13 | 7,328 | Fix typo in arrow_dataset | {
"login": "AndreaFrancis",
"id": 5564745,
"node_id": "MDQ6VXNlcjU1NjQ3NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreaFrancis",
"html_url": "https://github.com/AndreaFrancis",
"followers_url": "https://api.github.com/users/AndreaFrancis/followers",
"following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions",
"organizations_url": "https://api.github.com/users/AndreaFrancis/orgs",
"repos_url": "https://api.github.com/users/AndreaFrancis/repos",
"events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}",
"received_events_url": "https://api.github.com/users/AndreaFrancis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-13T15:17:09 | 2024-12-19T17:10:27 | 2024-12-19T17:10:25 | CONTRIBUTOR | null | null | {
"login": "AndreaFrancis",
"id": 5564745,
"node_id": "MDQ6VXNlcjU1NjQ3NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreaFrancis",
"html_url": "https://github.com/AndreaFrancis",
"followers_url": "https://api.github.com/users/AndreaFrancis/followers",
"following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions",
"organizations_url": "https://api.github.com/users/AndreaFrancis/orgs",
"repos_url": "https://api.github.com/users/AndreaFrancis/repos",
"events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}",
"received_events_url": "https://api.github.com/users/AndreaFrancis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7328/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7328",
"html_url": "https://github.com/huggingface/datasets/pull/7328",
"diff_url": "https://github.com/huggingface/datasets/pull/7328.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7328.patch",
"merged_at": "2024-12-19T17:10:25"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7327 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7327/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7327/comments | https://api.github.com/repos/huggingface/datasets/issues/7327/events | https://github.com/huggingface/datasets/issues/7327 | 2,738,514,909 | I_kwDODunzps6jOmvd | 7,327 | .map() is not caching and ram goes OOM | {
"login": "simeneide",
"id": 7136076,
"node_id": "MDQ6VXNlcjcxMzYwNzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7136076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simeneide",
"html_url": "https://github.com/simeneide",
"followers_url": "https://api.github.com/users/simeneide/followers",
"following_url": "https://api.github.com/users/simeneide/following{/other_user}",
"gists_url": "https://api.github.com/users/simeneide/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simeneide/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simeneide/subscriptions",
"organizations_url": "https://api.github.com/users/simeneide/orgs",
"repos_url": "https://api.github.com/users/simeneide/repos",
"events_url": "https://api.github.com/users/simeneide/events{/privacy}",
"received_events_url": "https://api.github.com/users/simeneide/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-13T14:22:56 | 2024-12-13T14:22:56 | null | NONE | null | ### Describe the bug
I'm trying to run a fairly simple map that converts a dataset into numpy arrays. However, it just piles up in memory and doesn't write to disk. I've tried multiple cache techniques such as specifying the cache dir, setting max memory, etc., but none seem to work. What am I missing here?
### Steps to reproduce the bug
```
from pydub import AudioSegment
import io
import base64
import numpy as np
import os
CACHE_PATH = "/mnt/extdisk/cache" # "/root/.cache/huggingface/"#
os.environ["HF_HOME"] = CACHE_PATH
import datasets
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
# Create a handler for Jupyter notebook
handler = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
#datasets.config.IN_MEMORY_MAX_SIZE= 1000#*(2**30) #50 gb
print(datasets.config.HF_CACHE_HOME)
print(datasets.config.HF_DATASETS_CACHE)
# Decode the base64 string into bytes
def convert_mp3_to_audio_segment(example):
"""
example = ds['train'][0]
"""
try:
audio_data_bytes = base64.b64decode(example['audio'])
# Use pydub to load the MP3 audio from the decoded bytes
audio_segment = AudioSegment.from_file(io.BytesIO(audio_data_bytes), format="mp3")
# Resample to 24_000
audio_segment = audio_segment.set_frame_rate(24_000)
audio = {'sampling_rate' : audio_segment.frame_rate,
'array' : np.array(audio_segment.get_array_of_samples(), dtype="float")}
del audio_segment
duration = len(audio['array']) / audio['sampling_rate']
except Exception as e:
logger.warning(f"Failed to convert audio for {example['id']}. Error: {e}")
audio = {'sampling_rate' : 0,
'array' : np.array([])}
duration = 0  # set explicitly so the return below does not raise a NameError
return {'audio' : audio, 'duration' : duration}
ds = datasets.load_dataset("NbAiLab/nb_distil_speech_noconcat_stortinget", cache_dir=CACHE_PATH, keep_in_memory=False)
#%%
num_proc=32
ds_processed = (
ds
#.select(range(10))
.map(convert_mp3_to_audio_segment, num_proc=num_proc, desc="Converting mp3 to audio segment") #, cache_file_name=f"{CACHE_PATH}/stortinget_audio" # , cache_file_name="test"
)
```
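One parameter worth trying here: `Dataset.map` accepts `writer_batch_size` (default 1000), the number of processed rows each worker buffers in RAM before flushing them to the on-disk Arrow cache file. With `num_proc=32` and every row carrying a decoded 24 kHz float64 array, those buffers alone can be many gigabytes, which looks exactly like "piles up on memory and doesn't write to disk". Something like `.map(convert_mp3_to_audio_segment, num_proc=num_proc, writer_batch_size=32, desc=...)` keeps the buffers small. The underlying pattern (flush every N results instead of accumulating) in a stdlib-only sketch:

```python
import json
import tempfile
from pathlib import Path

def map_with_flush(items, fn, out_path, writer_batch_size=32):
    """Apply fn to items, holding at most writer_batch_size results in RAM."""
    buffer = []
    with open(out_path, "w", encoding="utf-8") as out:
        for item in items:
            buffer.append(fn(item))
            if len(buffer) >= writer_batch_size:
                out.writelines(json.dumps(r) + "\n" for r in buffer)
                buffer.clear()  # memory is released here, every N rows
        out.writelines(json.dumps(r) + "\n" for r in buffer)  # final partial batch

out_file = Path(tempfile.mkdtemp()) / "processed.jsonl"
map_with_flush(range(100), lambda i: {"i": i, "sq": i * i}, out_file, writer_batch_size=8)
print(sum(1 for _ in open(out_file, encoding="utf-8")))  # 100
```

`writer_batch_size` trades speed for memory; given how large each decoded audio row is here, a small value is probably the right trade.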
### Expected behavior
the map should write to disk
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-6.8.0-45-generic-x86_64-with-glibc2.39
- Python version: 3.12.7
- `huggingface_hub` version: 0.26.3
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7327/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7327/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7326 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7326/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7326/comments | https://api.github.com/repos/huggingface/datasets/issues/7326/events | https://github.com/huggingface/datasets/issues/7326 | 2,738,188,902 | I_kwDODunzps6jNXJm | 7,326 | Remove upper bound for fsspec | {
"login": "fellhorn",
"id": 26092524,
"node_id": "MDQ6VXNlcjI2MDkyNTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/26092524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fellhorn",
"html_url": "https://github.com/fellhorn",
"followers_url": "https://api.github.com/users/fellhorn/followers",
"following_url": "https://api.github.com/users/fellhorn/following{/other_user}",
"gists_url": "https://api.github.com/users/fellhorn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fellhorn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fellhorn/subscriptions",
"organizations_url": "https://api.github.com/users/fellhorn/orgs",
"repos_url": "https://api.github.com/users/fellhorn/repos",
"events_url": "https://api.github.com/users/fellhorn/events{/privacy}",
"received_events_url": "https://api.github.com/users/fellhorn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-13T11:35:12 | 2024-12-16T11:08:10 | null | NONE | null | ### Describe the bug
As also raised by @cyyever in https://github.com/huggingface/datasets/pull/7296 and @NeilGirdhar in https://github.com/huggingface/datasets/commit/d5468836fe94e8be1ae093397dd43d4a2503b926#commitcomment-140952162 , `datasets` has a problematic version constraint on `fsspec`.
In our case this causes (unnecessary?) troubles due to a race condition bug in that version of the corresponding `gcsfs` plugin, that causes deadlocks: https://github.com/fsspec/gcsfs/pull/643
We just use a version override to ignore the constraint from `datasets`, but imho the version constraint could just be removed in the first place?
The last few PRs bumping the upper bound were basically uneventful:
* https://github.com/huggingface/datasets/pull/7219
* https://github.com/huggingface/datasets/pull/6921
* https://github.com/huggingface/datasets/pull/6747
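For anyone else blocked by the pin in the meantime, most modern resolvers can override a transitive constraint explicitly. A sketch with uv (adjust to your tool; note that an override ignores what `datasets` declares, so test the resulting combination yourself):

```toml
# pyproject.toml
[tool.uv]
override-dependencies = ["fsspec>=2024.10.0"]
```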
### Steps to reproduce the bug
-
### Expected behavior
Installing `fsspec>=2024.10.0` along `datasets` should be possible without overwriting constraints.
### Environment info
All recent datasets versions | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7326/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7326/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7325/comments | https://api.github.com/repos/huggingface/datasets/issues/7325/events | https://github.com/huggingface/datasets/pull/7325 | 2,736,618,054 | PR_kwDODunzps6FDpMp | 7,325 | Introduce pdf support (#7318) | {
"login": "yabramuvdi",
"id": 4812761,
"node_id": "MDQ6VXNlcjQ4MTI3NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4812761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yabramuvdi",
"html_url": "https://github.com/yabramuvdi",
"followers_url": "https://api.github.com/users/yabramuvdi/followers",
"following_url": "https://api.github.com/users/yabramuvdi/following{/other_user}",
"gists_url": "https://api.github.com/users/yabramuvdi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yabramuvdi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yabramuvdi/subscriptions",
"organizations_url": "https://api.github.com/users/yabramuvdi/orgs",
"repos_url": "https://api.github.com/users/yabramuvdi/repos",
"events_url": "https://api.github.com/users/yabramuvdi/events{/privacy}",
"received_events_url": "https://api.github.com/users/yabramuvdi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-12-12T18:31:18 | 2024-12-19T17:22:51 | null | NONE | null | First implementation of the Pdf feature to support pdfs (#7318). Using [pdfplumber](https://github.com/jsvine/pdfplumber?tab=readme-ov-file#python-library) as the default library to work with pdfs.
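A note for reviewers: if `Pdf` follows the existing `Image`/`Audio` convention, examples would be normalised into a bytes-or-path struct at encode time and only opened with pdfplumber lazily at decode time. Rough stdlib illustration of that encode convention (not the actual code in this PR):

```python
def encode_pdf_example(value):
    """Normalise a path / raw bytes / struct into the bytes-or-path storage form."""
    if isinstance(value, str):      # a path on disk
        return {"bytes": None, "path": value}
    if isinstance(value, bytes):    # in-memory PDF data
        return {"bytes": value, "path": None}
    if isinstance(value, dict):     # already in storage form
        return {"bytes": value.get("bytes"), "path": value.get("path")}
    raise TypeError(f"unsupported type for Pdf: {type(value)}")

print(encode_pdf_example("doc.pdf"))  # {'bytes': None, 'path': 'doc.pdf'}
```

Keeping that storage shape identical to `Image`/`Audio` would let the viewer and parquet conversion treat pdfs uniformly.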
@lhoestq and @AndreaFrancis | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7325/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7325",
"html_url": "https://github.com/huggingface/datasets/pull/7325",
"diff_url": "https://github.com/huggingface/datasets/pull/7325.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7325.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7323/comments | https://api.github.com/repos/huggingface/datasets/issues/7323/events | https://github.com/huggingface/datasets/issues/7323 | 2,736,008,698 | I_kwDODunzps6jFC36 | 7,323 | Unexpected cache behaviour using load_dataset | {
"login": "Moritz-Wirth",
"id": 74349080,
"node_id": "MDQ6VXNlcjc0MzQ5MDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/74349080?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Moritz-Wirth",
"html_url": "https://github.com/Moritz-Wirth",
"followers_url": "https://api.github.com/users/Moritz-Wirth/followers",
"following_url": "https://api.github.com/users/Moritz-Wirth/following{/other_user}",
"gists_url": "https://api.github.com/users/Moritz-Wirth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Moritz-Wirth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moritz-Wirth/subscriptions",
"organizations_url": "https://api.github.com/users/Moritz-Wirth/orgs",
"repos_url": "https://api.github.com/users/Moritz-Wirth/repos",
"events_url": "https://api.github.com/users/Moritz-Wirth/events{/privacy}",
"received_events_url": "https://api.github.com/users/Moritz-Wirth/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-12T14:03:00 | 2024-12-12T14:18:17 | null | NONE | null | ### Describe the bug
Following the [Cache management](https://huggingface.co/docs/datasets/en/cache) documentation and previous behaviour from datasets version 2.18.0, one is able to change the cache directory. Previously, all downloaded/extracted/etc. files were found in this folder. As I have recently updated to the latest version, this is not the case anymore. Downloaded files are stored in `~/.cache/huggingface/hub`.
Providing the `cache_dir` argument in `load_dataset`, the cache directory is created and there are some files, but the bulk is still in `~/.cache/huggingface/hub`.
I believe this could be solved by adding the cache_dir argument [here](https://github.com/huggingface/datasets/blob/fdda5585ab18ea1292547f36c969d12c408ab842/src/datasets/utils/file_utils.py#L188)
### Steps to reproduce the bug
For example using https://huggingface.co/datasets/ashraq/esc50:
```python
from datasets import load_dataset
ds = load_dataset("ashraq/esc50", "default", cache_dir="~/custom/cache/path/esc50")
```
### Expected behavior
I would expect the bulk of files related to the dataset to be stored somewhere in `~/custom/cache/path/esc50`, but it seems they are in `~/.cache/huggingface/hub/datasets--ashraq--esc50`.
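Until the `cache_dir` argument is propagated everywhere, one workaround may be to redirect both caches through environment variables before importing `datasets` (a sketch; the target path is a placeholder, and `HF_DATASETS_CACHE`/`HF_HUB_CACHE` are the variables documented by the Hub libraries):

```python
import os

# Placeholder path -- substitute your own location.
custom_cache = os.path.expanduser("~/custom/cache/path")

# Must be set *before* `import datasets` / `import huggingface_hub`,
# otherwise the default cache locations are already resolved.
os.environ["HF_DATASETS_CACHE"] = os.path.join(custom_cache, "datasets")
os.environ["HF_HUB_CACHE"] = os.path.join(custom_cache, "hub")

# from datasets import load_dataset
# ds = load_dataset("ashraq/esc50", "default")
```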
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-5.14.0-503.15.1.el9_5.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.14
- `huggingface_hub` version: 0.26.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7323/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7322 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7322/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7322/comments | https://api.github.com/repos/huggingface/datasets/issues/7322/events | https://github.com/huggingface/datasets/issues/7322 | 2,732,254,868 | I_kwDODunzps6i2uaU | 7,322 | ArrowInvalid: JSON parse error: Column() changed from object to array in row 0 | {
"login": "CLL112",
"id": 41767521,
"node_id": "MDQ6VXNlcjQxNzY3NTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/41767521?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CLL112",
"html_url": "https://github.com/CLL112",
"followers_url": "https://api.github.com/users/CLL112/followers",
"following_url": "https://api.github.com/users/CLL112/following{/other_user}",
"gists_url": "https://api.github.com/users/CLL112/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CLL112/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CLL112/subscriptions",
"organizations_url": "https://api.github.com/users/CLL112/orgs",
"repos_url": "https://api.github.com/users/CLL112/repos",
"events_url": "https://api.github.com/users/CLL112/events{/privacy}",
"received_events_url": "https://api.github.com/users/CLL112/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-11T08:41:39 | 2024-12-11T08:42:54 | null | NONE | null | ### Describe the bug
Encountering an error while loading the `liuhaotian/LLaVA-Instruct-150K` dataset.
### Steps to reproduce the bug
```
from datasets import load_dataset
fw =load_dataset("liuhaotian/LLaVA-Instruct-150K")
```
Error:
```
ArrowInvalid Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py](https://localhost:8080/#) in _generate_tables(self, files)
136 try:
--> 137 pa_table = paj.read_json(
138 io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
20 frames
ArrowInvalid: JSON parse error: Column() changed from object to array in row 0
During handling of the above exception, another exception occurred:
ArrowTypeError Traceback (most recent call last)
ArrowTypeError: ("Expected bytes, got a 'int' object", 'Conversion failed for column id with type object')
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1895 if isinstance(e, DatasetGenerationError):
1896 raise
-> 1897 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1898
1899 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
I have tried loading the dataset both on my own server and on Colab, and encountered errors in both instances.
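For what it's worth, this particular Arrow error usually means a column holds inconsistent types across rows (here, the second traceback hints that `id` is sometimes an int and sometimes a string). A workaround sketch is to normalize the offending column before handing the records to Arrow — the records below are toy data mimicking the failure mode, not the actual dataset contents:

```python
import json

# Toy records mimicking the failure: "id" flips between int and str,
# so pyarrow infers conflicting column types while streaming the file.
raw = '[{"id": 1, "conversations": []}, {"id": "abc", "conversations": []}]'
records = json.loads(raw)

# Coerce the inconsistent column to a single type, then build the dataset
# from the normalized records (e.g. with Dataset.from_list).
normalized = [{**r, "id": str(r["id"])} for r in records]

print([r["id"] for r in normalized])  # ['1', 'abc']
```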
### Environment info
```
- `datasets` version: 3.2.0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.26.3
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.9.0
```
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7322/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7321 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7321/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7321/comments | https://api.github.com/repos/huggingface/datasets/issues/7321/events | https://github.com/huggingface/datasets/issues/7321 | 2,731,626,760 | I_kwDODunzps6i0VEI | 7,321 | ImportError: cannot import name 'set_caching_enabled' from 'datasets' | {
"login": "sankexin",
"id": 33318353,
"node_id": "MDQ6VXNlcjMzMzE4MzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/33318353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sankexin",
"html_url": "https://github.com/sankexin",
"followers_url": "https://api.github.com/users/sankexin/followers",
"following_url": "https://api.github.com/users/sankexin/following{/other_user}",
"gists_url": "https://api.github.com/users/sankexin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sankexin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sankexin/subscriptions",
"organizations_url": "https://api.github.com/users/sankexin/orgs",
"repos_url": "https://api.github.com/users/sankexin/repos",
"events_url": "https://api.github.com/users/sankexin/events{/privacy}",
"received_events_url": "https://api.github.com/users/sankexin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-12-11T01:58:46 | 2024-12-11T13:32:15 | null | NONE | null | ### Describe the bug
Traceback (most recent call last):
File "/usr/local/lib/python3.10/runpy.py", line 187, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/usr/local/lib/python3.10/runpy.py", line 110, in _get_module_details
__import__(pkg_name)
File "/home/Medusa/axolotl/src/axolotl/cli/__init__.py", line 23, in <module>
from axolotl.train import TrainDatasetMeta
File "/home/Medusa/axolotl/src/axolotl/train.py", line 23, in <module>
from axolotl.utils.trainer import setup_trainer
File "/home/Medusa/axolotl/src/axolotl/utils/trainer.py", line 13, in <module>
from datasets import set_caching_enabled
ImportError: cannot import name 'set_caching_enabled' from 'datasets' (/usr/local/lib/python3.10/site-packages/datasets/__init__.py)
### Steps to reproduce the bug
1. axolotl
2. `accelerate launch -m axolotl.cli.train examples/medusa/qwen_lora_stage1.yml`
### Expected behavior
The `set_caching_enabled` import should succeed so the training run can start.
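`set_caching_enabled` was deprecated and later removed from `datasets` in favour of `enable_caching()`/`disable_caching()`. Until axolotl is updated, a small compatibility shim along these lines might help (a sketch, not an official API; the function name `resolve_set_caching_enabled` is made up here):

```python
def resolve_set_caching_enabled():
    """Return a set_caching_enabled-style callable for old or new `datasets`."""
    try:
        from datasets import set_caching_enabled  # datasets < 3.0
        return set_caching_enabled
    except ImportError:
        pass
    try:
        from datasets import disable_caching, enable_caching  # datasets >= 3.0
    except ImportError:
        return None  # `datasets` is not installed at all

    def set_caching_enabled(boolean: bool):
        (enable_caching if boolean else disable_caching)()

    return set_caching_enabled

shim = resolve_set_caching_enabled()
```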
### Environment info
python3.10 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7321/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7320/comments | https://api.github.com/repos/huggingface/datasets/issues/7320/events | https://github.com/huggingface/datasets/issues/7320 | 2,731,112,100 | I_kwDODunzps6iyXak | 7,320 | ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label'] | {
"login": "atrompeterog",
"id": 38381084,
"node_id": "MDQ6VXNlcjM4MzgxMDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/38381084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atrompeterog",
"html_url": "https://github.com/atrompeterog",
"followers_url": "https://api.github.com/users/atrompeterog/followers",
"following_url": "https://api.github.com/users/atrompeterog/following{/other_user}",
"gists_url": "https://api.github.com/users/atrompeterog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/atrompeterog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atrompeterog/subscriptions",
"organizations_url": "https://api.github.com/users/atrompeterog/orgs",
"repos_url": "https://api.github.com/users/atrompeterog/repos",
"events_url": "https://api.github.com/users/atrompeterog/events{/privacy}",
"received_events_url": "https://api.github.com/users/atrompeterog/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-10T20:23:11 | 2024-12-10T23:22:23 | 2024-12-10T23:22:23 | NONE | null | ### Describe the bug
I am trying to create a PEFT model from a DistilBERT model and run a training loop. However, `trainer.train()` is giving me this error: `ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label']`
Here is my code:
### Steps to reproduce the bug
#Creating a PEFT Config
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import get_peft_model
lora_config = LoraConfig(
task_type="SEQ_CLASS",
r=8,
lora_alpha=32,
target_modules=["q_lin", "k_lin", "v_lin"],
lora_dropout=0.01,
)
#Converting a Transformers Model into a PEFT Model
model = AutoModelForSequenceClassification.from_pretrained(
"distilbert-base-uncased",
num_labels=2, #Binary classification, 1 = positive, 0 = negative
)
lora_model = get_peft_model(model, lora_config)
print(lora_model)
#Tokenize the dataset
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
# Load the train and test splits dataset
dataset = load_dataset("fancyzhx/amazon_polarity")
#create a smaller subset for train and test
subset_size = 5000
small_train_dataset = dataset["train"].shuffle(seed=42).select(range(subset_size))
small_test_dataset = dataset["test"].shuffle(seed=42).select(range(subset_size))
#Tokenize data
def tokenize_function(example):
return tokenizer(example["content"], padding="max_length", truncation=True)
tokenized_train_dataset = small_train_dataset.map(tokenize_function, batched=True)
tokenized_test_dataset = small_test_dataset.map(tokenize_function, batched=True)
train_lora = tokenized_train_dataset.rename_column('label', 'labels')
test_lora = tokenized_test_dataset.rename_column('label', 'labels')
print(tokenized_train_dataset.column_names)
print(tokenized_test_dataset.column_names)
#Train the PEFT model
import numpy as np
from transformers import Trainer, TrainingArguments, default_data_collator, DataCollatorWithPadding
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification
def compute_metrics(eval_pred):
predictions, labels = eval_pred
predictions = np.argmax(predictions, axis=1)
return {"accuracy": (predictions == labels).mean()}
trainer = Trainer(
model=lora_model,
args=TrainingArguments(
output_dir=".",
learning_rate=2e-3,
# Reduce the batch size if you don't have enough memory
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
num_train_epochs=3,
weight_decay=0.01,
evaluation_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
),
train_dataset=tokenized_train_dataset,
eval_dataset=tokenized_test_dataset,
tokenizer=tokenizer,
data_collator=DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="pt"),
compute_metrics=compute_metrics,
)
trainer.train()
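Two details in the snippet above may explain the error (my reading, not a verified fix): the `Trainer` receives `tokenized_train_dataset`/`tokenized_test_dataset` rather than the renamed `train_lora`/`test_lora`, and `Trainer` drops columns that don't match the wrapped model's `forward` signature, which can strip `input_ids` when the model is a PEFT wrapper. Passing the renamed datasets and setting `remove_unused_columns=False` in `TrainingArguments` is worth trying; the rename step itself can be sanity-checked without any heavy dependencies:

```python
# Sketch of the rename that Trainer expects ("labels", not "label").
example = {"content": "great product", "label": 1, "input_ids": [101, 2307]}
renamed = {("labels" if k == "label" else k): v for k, v in example.items()}

print(sorted(renamed))  # ['content', 'input_ids', 'labels']
```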
### Expected behavior
Example of output:
[558/558 01:04, Epoch XX]
Epoch | Training Loss | Validation Loss | Accuracy
-- | -- | -- | --
1 | No log | 0.046478 | 0.988341
2 | 0.052800 | 0.048840 | 0.988341
### Environment info
Using Python and a Jupyter notebook | null | {
"login": "atrompeterog",
"id": 38381084,
"node_id": "MDQ6VXNlcjM4MzgxMDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/38381084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atrompeterog",
"html_url": "https://github.com/atrompeterog",
"followers_url": "https://api.github.com/users/atrompeterog/followers",
"following_url": "https://api.github.com/users/atrompeterog/following{/other_user}",
"gists_url": "https://api.github.com/users/atrompeterog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/atrompeterog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atrompeterog/subscriptions",
"organizations_url": "https://api.github.com/users/atrompeterog/orgs",
"repos_url": "https://api.github.com/users/atrompeterog/repos",
"events_url": "https://api.github.com/users/atrompeterog/events{/privacy}",
"received_events_url": "https://api.github.com/users/atrompeterog/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7320/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7319/comments | https://api.github.com/repos/huggingface/datasets/issues/7319/events | https://github.com/huggingface/datasets/pull/7319 | 2,730,679,980 | PR_kwDODunzps6EvHBp | 7,319 | set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-10T17:01:34 | 2024-12-10T17:04:04 | 2024-12-10T17:01:45 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7319/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7319",
"html_url": "https://github.com/huggingface/datasets/pull/7319",
"diff_url": "https://github.com/huggingface/datasets/pull/7319.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7319.patch",
"merged_at": "2024-12-10T17:01:45"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7318 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7318/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7318/comments | https://api.github.com/repos/huggingface/datasets/issues/7318/events | https://github.com/huggingface/datasets/issues/7318 | 2,730,676,278 | I_kwDODunzps6iwtA2 | 7,318 | Introduce support for PDFs | {
"login": "yabramuvdi",
"id": 4812761,
"node_id": "MDQ6VXNlcjQ4MTI3NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4812761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yabramuvdi",
"html_url": "https://github.com/yabramuvdi",
"followers_url": "https://api.github.com/users/yabramuvdi/followers",
"following_url": "https://api.github.com/users/yabramuvdi/following{/other_user}",
"gists_url": "https://api.github.com/users/yabramuvdi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yabramuvdi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yabramuvdi/subscriptions",
"organizations_url": "https://api.github.com/users/yabramuvdi/orgs",
"repos_url": "https://api.github.com/users/yabramuvdi/repos",
"events_url": "https://api.github.com/users/yabramuvdi/events{/privacy}",
"received_events_url": "https://api.github.com/users/yabramuvdi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 6 | 2024-12-10T16:59:48 | 2024-12-12T18:38:13 | null | NONE | null | ### Feature request
The idea (discussed in the Discord server with @lhoestq) is to have a Pdf type like Image/Audio/Video. For example, [Video](https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py) was recently added and shows how to decode a video file encoded in a dictionary like {"path": ..., "bytes": ...} as a VideoReader using decord. We want to do the same with PDF and get a [pypdfium2.PdfDocument](https://pypdfium2.readthedocs.io/en/stable/_modules/pypdfium2/_helpers/document.html#PdfDocument).
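A minimal sketch of the decode step such a `Pdf` feature might need — the dict shape mirrors the existing Image/Audio/Video features, the function name is made up for illustration, and the actual `pypdfium2` call is left commented out since it is an optional dependency:

```python
import io

def decode_pdf(value: dict):
    """Turn a {'path': ..., 'bytes': ...} sample into something pypdfium2 can open."""
    if value.get("bytes") is not None:
        source = io.BytesIO(value["bytes"])
    elif value.get("path") is not None:
        source = value["path"]
    else:
        raise ValueError("A Pdf sample needs a 'path' or 'bytes' entry.")
    # import pypdfium2
    # return pypdfium2.PdfDocument(source)  # accepts both paths and buffers
    return source

sample = {"path": None, "bytes": b"%PDF-1.4 ..."}
print(type(decode_pdf(sample)).__name__)  # BytesIO
```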
### Motivation
In many cases PDFs contain very valuable information beyond text (e.g. images, figures). Support for PDFs would help create datasets where all the information is preserved.
### Your contribution
I can start the implementation of the Pdf type :) | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7318/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7318/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7317 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7317/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7317/comments | https://api.github.com/repos/huggingface/datasets/issues/7317/events | https://github.com/huggingface/datasets/pull/7317 | 2,730,661,237 | PR_kwDODunzps6EvC5Q | 7,317 | Release: 3.2.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-10T16:53:20 | 2024-12-10T16:56:58 | 2024-12-10T16:56:56 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7317/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7317",
"html_url": "https://github.com/huggingface/datasets/pull/7317",
"diff_url": "https://github.com/huggingface/datasets/pull/7317.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7317.patch",
"merged_at": "2024-12-10T16:56:56"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7316 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7316/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7316/comments | https://api.github.com/repos/huggingface/datasets/issues/7316/events | https://github.com/huggingface/datasets/pull/7316 | 2,730,196,085 | PR_kwDODunzps6Etc0U | 7,316 | More docs to from_dict to mention that the result lives in RAM | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-10T13:56:01 | 2024-12-10T13:58:32 | 2024-12-10T13:57:02 | MEMBER | null | following discussions at https://discuss.huggingface.co/t/how-to-load-this-simple-audio-data-set-and-use-dataset-map-without-memory-issues/17722/14 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7316/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7316/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7316",
"html_url": "https://github.com/huggingface/datasets/pull/7316",
"diff_url": "https://github.com/huggingface/datasets/pull/7316.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7316.patch",
"merged_at": "2024-12-10T13:57:02"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7314 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7314/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7314/comments | https://api.github.com/repos/huggingface/datasets/issues/7314/events | https://github.com/huggingface/datasets/pull/7314 | 2,727,502,630 | PR_kwDODunzps6EkCi5 | 7,314 | Resolved for empty datafiles | {
"login": "sahillihas",
"id": 20582290,
"node_id": "MDQ6VXNlcjIwNTgyMjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/20582290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sahillihas",
"html_url": "https://github.com/sahillihas",
"followers_url": "https://api.github.com/users/sahillihas/followers",
"following_url": "https://api.github.com/users/sahillihas/following{/other_user}",
"gists_url": "https://api.github.com/users/sahillihas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sahillihas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sahillihas/subscriptions",
"organizations_url": "https://api.github.com/users/sahillihas/orgs",
"repos_url": "https://api.github.com/users/sahillihas/repos",
"events_url": "https://api.github.com/users/sahillihas/events{/privacy}",
"received_events_url": "https://api.github.com/users/sahillihas/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-12-09T15:47:22 | 2024-12-27T18:20:21 | null | NONE | null | Resolved for Issue#6152 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7314/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7314",
"html_url": "https://github.com/huggingface/datasets/pull/7314",
"diff_url": "https://github.com/huggingface/datasets/pull/7314.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7314.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7313 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7313/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7313/comments | https://api.github.com/repos/huggingface/datasets/issues/7313/events | https://github.com/huggingface/datasets/issues/7313 | 2,726,240,634 | I_kwDODunzps6ifyF6 | 7,313 | Cannot create a dataset with relative audio path | {
"login": "sedol1339",
"id": 5188731,
"node_id": "MDQ6VXNlcjUxODg3MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5188731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sedol1339",
"html_url": "https://github.com/sedol1339",
"followers_url": "https://api.github.com/users/sedol1339/followers",
"following_url": "https://api.github.com/users/sedol1339/following{/other_user}",
"gists_url": "https://api.github.com/users/sedol1339/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sedol1339/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sedol1339/subscriptions",
"organizations_url": "https://api.github.com/users/sedol1339/orgs",
"repos_url": "https://api.github.com/users/sedol1339/repos",
"events_url": "https://api.github.com/users/sedol1339/events{/privacy}",
"received_events_url": "https://api.github.com/users/sedol1339/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-12-09T07:34:20 | 2024-12-12T13:46:38 | null | NONE | null | ### Describe the bug
Hello! I want to create a dataset of parquet files, with the audio stored as separate .mp3 files. However, it says "No such file or directory" (see the reproducing code).
### Steps to reproduce the bug
Creating a dataset
```
from pathlib import Path
from datasets import Dataset, load_dataset, Audio
Path('my_dataset/audio').mkdir(parents=True, exist_ok=True)
Path('my_dataset/audio/file.mp3').touch(exist_ok=True)
Dataset.from_list(
[{'audio': {'path': 'audio/file.mp3'}}]
).to_parquet('my_dataset/data.parquet')
```
Result:
```
# my_dataset
# ├── audio
# │ └── file.mp3
# └── data.parquet
```
Trying to load the dataset
```
dataset = (
load_dataset('my_dataset', split='train')
.cast_column('audio', Audio(sampling_rate=16_000))
)
dataset[0]
>>> FileNotFoundError: [Errno 2] No such file or directory: 'audio/file.mp3'
```
### Expected behavior
I expect the dataset to load correctly.
I've found 2 workarounds, but they are not very good:
1. I can specify an absolute path to the audio; however, when I move the folder or upload it to HF, it will stop working.
2. I can set `'path': 'file.mp3'`, and load with `load_dataset('my_dataset', data_dir='audio')` - it seems to work, but does this mean that anyone from Hugging Face who wants to use this dataset should also pass the `data_dir` argument, otherwise it won't work?
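
For reference, here is a minimal sketch of what I would expect the loader to do with a relative path (a hypothetical helper, not an actual `datasets` API — just the resolution behavior I have in mind):

```python
from pathlib import Path

def resolve_audio_path(dataset_root, relative_path):
    """Resolve an audio 'path' stored relative to the dataset's root folder.

    Hypothetical helper: this mimics what I would expect load_dataset to do
    internally before decoding the audio, so that relative paths keep working
    after the dataset folder is moved or uploaded.
    """
    resolved = Path(dataset_root) / relative_path
    if not resolved.is_file():
        raise FileNotFoundError(f"No such file or directory: '{relative_path}'")
    return str(resolved)
```

With this, `'audio/file.mp3'` would resolve against `my_dataset/` regardless of where the folder lives.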
### Environment info
datasets 3.1.0, Ubuntu 24.04.1 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7313/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7312 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7312/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7312/comments | https://api.github.com/repos/huggingface/datasets/issues/7312/events | https://github.com/huggingface/datasets/pull/7312 | 2,725,103,094 | PR_kwDODunzps6EbwNN | 7,312 | [Audio Features - DO NOT MERGE] PoC for adding an offset+sliced reading to audio file. | {
"login": "TParcollet",
"id": 11910731,
"node_id": "MDQ6VXNlcjExOTEwNzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/11910731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TParcollet",
"html_url": "https://github.com/TParcollet",
"followers_url": "https://api.github.com/users/TParcollet/followers",
"following_url": "https://api.github.com/users/TParcollet/following{/other_user}",
"gists_url": "https://api.github.com/users/TParcollet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TParcollet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TParcollet/subscriptions",
"organizations_url": "https://api.github.com/users/TParcollet/orgs",
"repos_url": "https://api.github.com/users/TParcollet/repos",
"events_url": "https://api.github.com/users/TParcollet/events{/privacy}",
"received_events_url": "https://api.github.com/users/TParcollet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-08T10:27:31 | 2024-12-08T10:27:31 | null | NONE | null | This is a proof of concept for #7310. The idea is to enable access to other columns of the dataset row when loading an audio file into a table. This is to allow sliced reading. As stated in the issue, many people have very long audio files and use start and stop slicing within those files.
Right now, this code works as a PoC on my dataset. However, this is **just to illustrate** the idea. Many things are messed up, the first being that the shards have wildly varying sizes.
Could be of interest to @lhoestq and @sanchit-gandhi ?
Happy to test better ideas locally. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7312/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7312",
"html_url": "https://github.com/huggingface/datasets/pull/7312",
"diff_url": "https://github.com/huggingface/datasets/pull/7312.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7312.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7311 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7311/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7311/comments | https://api.github.com/repos/huggingface/datasets/issues/7311/events | https://github.com/huggingface/datasets/issues/7311 | 2,725,002,630 | I_kwDODunzps6ibD2G | 7,311 | How to get the original dataset name with username? | {
"login": "npuichigo",
"id": 11533479,
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npuichigo",
"html_url": "https://github.com/npuichigo",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-12-08T07:18:14 | 2024-12-08T07:19:41 | null | CONTRIBUTOR | null | ### Feature request
The issue is related to Ray Data (https://github.com/ray-project/ray/issues/49008), which requires checking whether a dataset is the original one just after `load_dataset`, with its parquet files already available on the HF Hub.
The solution used now is to get the dataset name, config and split, then call `load_dataset` again and check the fingerprint. But it's unable to get the correct dataset name if the name contains a username. So how can one get the dataset name with the username prefix, or is there another way to check whether a dataset is the original one with parquet files available?
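
To make the intended comparison concrete, here is a toy sketch (the hash below is only a stand-in — `datasets` computes its real fingerprint internally; this just shows the "reload and compare" logic):

```python
import hashlib

def toy_fingerprint(rows):
    """Toy content hash over a dataset's rows.

    NOTE: purely illustrative -- NOT the real `datasets` fingerprint
    algorithm. It only demonstrates the comparison workflow.
    """
    h = hashlib.sha256()
    for row in rows:
        h.update(repr(sorted(row.items())).encode("utf-8"))
    return h.hexdigest()[:16]

original = [{"text": "hello"}, {"text": "world"}]
reloaded = [{"text": "hello"}, {"text": "world"}]   # load_dataset(...) again
modified = [{"text": "HELLO"}, {"text": "world"}]

# If the fingerprints match, the dataset was not transformed in between.
is_original = toy_fingerprint(original) == toy_fingerprint(reloaded)
```

The pain point is the step before this: recovering the exact `username/dataset` repo id needed to reload.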
@lhoestq
### Motivation
https://github.com/ray-project/ray/issues/49008
### Your contribution
Would like to fix that. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7311/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7310 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7310/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7310/comments | https://api.github.com/repos/huggingface/datasets/issues/7310/events | https://github.com/huggingface/datasets/issues/7310 | 2,724,830,603 | I_kwDODunzps6iaZ2L | 7,310 | Enable the Audio Feature to decode / read with an offset + duration | {
"login": "TParcollet",
"id": 11910731,
"node_id": "MDQ6VXNlcjExOTEwNzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/11910731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TParcollet",
"html_url": "https://github.com/TParcollet",
"followers_url": "https://api.github.com/users/TParcollet/followers",
"following_url": "https://api.github.com/users/TParcollet/following{/other_user}",
"gists_url": "https://api.github.com/users/TParcollet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TParcollet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TParcollet/subscriptions",
"organizations_url": "https://api.github.com/users/TParcollet/orgs",
"repos_url": "https://api.github.com/users/TParcollet/repos",
"events_url": "https://api.github.com/users/TParcollet/events{/privacy}",
"received_events_url": "https://api.github.com/users/TParcollet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 2 | 2024-12-07T22:01:44 | 2024-12-09T21:09:46 | null | NONE | null | ### Feature request
For most large speech datasets, we do not wish to generate hundreds of millions of small audio samples. Instead, it is quite common to provide larger audio files with frame offsets (soundfile's start and stop arguments). We should be able to pass these arguments to Audio() (as column IDs referring to the corresponding fields in the dataset row).
### Motivation
I am currently generating a fairly big dataset to .parquet(). Unfortunately, it does not work because all existing functions load the whole .wav file corresponding to the row. All my attempts at bypassing this have failed. We should be able to put into the Table only the bytes corresponding to what soundfile reads with an offset (a subset of the audio file).
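
To illustrate the kind of sliced read I mean, here is a minimal stdlib sketch with the `wave` module on an in-memory WAV (a real implementation would presumably go through soundfile's start/stop arguments instead):

```python
import io
import wave

def make_wav(n_frames, sampling_rate=16000):
    """Build a silent mono 16-bit WAV in memory, n_frames long (test fixture)."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(sampling_rate)
        w.writeframes(b"\x00\x00" * n_frames)
    return buf.getvalue()

def read_slice(wav_bytes, start_frame, n_frames):
    """Read only [start_frame, start_frame + n_frames) instead of the whole file."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as w:
        w.setpos(start_frame)          # seek to the offset
        return w.readframes(n_frames)  # decode just the requested duration

wav = make_wav(48000)                  # 3 s at 16 kHz
chunk = read_slice(wav, 16000, 8000)   # 0.5 s starting at t = 1 s
```

Only the requested bytes are decoded, which is exactly what I would like the Audio feature to store per row.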
### Your contribution
I can totally test whatever code on my large dataset creation script. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7310/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7315 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7315/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7315/comments | https://api.github.com/repos/huggingface/datasets/issues/7315/events | https://github.com/huggingface/datasets/issues/7315 | 2,729,738,963 | I_kwDODunzps6itILT | 7,315 | Allow manual configuration of Dataset Viewer for datasets not created with the `datasets` library | {
"login": "diarray-hub",
"id": 114512099,
"node_id": "U_kgDOBtNQ4w",
"avatar_url": "https://avatars.githubusercontent.com/u/114512099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/diarray-hub",
"html_url": "https://github.com/diarray-hub",
"followers_url": "https://api.github.com/users/diarray-hub/followers",
"following_url": "https://api.github.com/users/diarray-hub/following{/other_user}",
"gists_url": "https://api.github.com/users/diarray-hub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/diarray-hub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/diarray-hub/subscriptions",
"organizations_url": "https://api.github.com/users/diarray-hub/orgs",
"repos_url": "https://api.github.com/users/diarray-hub/repos",
"events_url": "https://api.github.com/users/diarray-hub/events{/privacy}",
"received_events_url": "https://api.github.com/users/diarray-hub/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 13 | 2024-12-07T16:37:12 | 2024-12-11T11:05:22 | null | NONE | null | #### **Problem Description**
Currently, the Hugging Face Dataset Viewer automatically interprets dataset fields for datasets created with the `datasets` library. However, for datasets pushed directly via `git`, the Viewer:
- Defaults to generic columns like `label` with `null` values if no explicit mapping is provided.
- Does not allow dataset creators to configure field mappings or suppress default fields unless the dataset is recreated and pushed using the `datasets` library.
This creates a limitation for creators who:
- Use custom workflows to prepare datasets (e.g., manifest files with audio-transcription mappings).
- Push large datasets directly via `git` and cannot easily restructure them to conform to the `datasets` library format.
#### **Proposed Solution**
Introduce a feature that allows dataset creators to manually configure the Dataset Viewer behavior for datasets not created with the `datasets` library. This could be achieved by:
1. **Using the YAML Metadata in `README.md`:**
- Add support for defining the dataset's field mappings directly in the `README.md` YAML section.
- Example:
```yaml
viewer:
fields:
- name: "audio"
type: "audio_path" / "text"
source: "manifest['audio']"
- name: "bambara_transcription"
type: "text"
source: "manifest['bambara']"
- name: "french_translation"
type: "text"
source: "manifest['french']"
```
   Here, `manifest` would be a CSV- or JSON-like file in the repository, so that the Viewer understands it should look up the values of each field in that file.
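
To make this concrete, a small sketch of what such a manifest could contain and how a Viewer implementation could resolve the configured fields (file layout and field values are purely illustrative):

```python
import json

# Hypothetical manifest (JSON Lines): one record per sample, with the
# field names used in the YAML example above.
records = [
    {"audio": "audio/sample_0001.mp3",
     "bambara": "<bambara transcription>",
     "french": "<french translation>"},
]
manifest = "\n".join(json.dumps(r, ensure_ascii=False) for r in records)

# A Viewer could then resolve each configured field from the manifest row:
row = json.loads(manifest.splitlines()[0])
audio_path = row["audio"]
bambara_transcription = row["bambara"]
french_translation = row["french"]
```

The YAML metadata would only need to tell the Viewer which manifest file to read and which keys map to which displayed columns.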
#### **Benefits**
- Improves flexibility for dataset creators who push datasets via `git`.
- Enhances dataset discoverability and usability on the Hugging Face Hub by allowing creators to present meaningful field mappings without restructuring their data.
- Reduces overhead for creators of large or complex datasets.
#### **Examples of Use Case**
- An audio dataset with transcriptions in multiple languages stored in a `manifest.json` file, where the user wants the Viewer to:
  - Display the `audio` column and explicitly map the features the user defined, such as `bambara_transcription` and `french_translation`, from the manifest. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7315/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7309 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7309/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7309/comments | https://api.github.com/repos/huggingface/datasets/issues/7309/events | https://github.com/huggingface/datasets/pull/7309 | 2,723,636,931 | PR_kwDODunzps6EW77b | 7,309 | Faster parquet streaming + filters with predicate pushdown | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-06T18:01:54 | 2024-12-07T23:32:30 | 2024-12-07T23:32:28 | MEMBER | null | ParquetFragment.to_batches uses a buffered stream to read parquet data, which makes streaming faster (x2 on my laptop).
I also added the `filters` config parameter to support filtering with predicate pushdown, e.g.
```python
from datasets import load_dataset
filters = [('problem_source', '==', 'math')]
ds = load_dataset("nvidia/OpenMathInstruct-2", streaming=True, filters=filters)
first_example = next(iter(ds["train"]))
print(first_example["problem_source"])
# 'math'
```
cc @allisonwang-db this is a nice plus for usage in spark | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7309/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7309",
"html_url": "https://github.com/huggingface/datasets/pull/7309",
"diff_url": "https://github.com/huggingface/datasets/pull/7309.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7309.patch",
"merged_at": "2024-12-07T23:32:28"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7307 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7307/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7307/comments | https://api.github.com/repos/huggingface/datasets/issues/7307/events | https://github.com/huggingface/datasets/pull/7307 | 2,720,244,889 | PR_kwDODunzps6ELKcR | 7,307 | refactor: remove unnecessary else | {
"login": "HarikrishnanBalagopal",
"id": 20921177,
"node_id": "MDQ6VXNlcjIwOTIxMTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/20921177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HarikrishnanBalagopal",
"html_url": "https://github.com/HarikrishnanBalagopal",
"followers_url": "https://api.github.com/users/HarikrishnanBalagopal/followers",
"following_url": "https://api.github.com/users/HarikrishnanBalagopal/following{/other_user}",
"gists_url": "https://api.github.com/users/HarikrishnanBalagopal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HarikrishnanBalagopal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HarikrishnanBalagopal/subscriptions",
"organizations_url": "https://api.github.com/users/HarikrishnanBalagopal/orgs",
"repos_url": "https://api.github.com/users/HarikrishnanBalagopal/repos",
"events_url": "https://api.github.com/users/HarikrishnanBalagopal/events{/privacy}",
"received_events_url": "https://api.github.com/users/HarikrishnanBalagopal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-05T12:11:09 | 2024-12-06T15:11:33 | null | NONE | null | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7307/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7307/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7307",
"html_url": "https://github.com/huggingface/datasets/pull/7307",
"diff_url": "https://github.com/huggingface/datasets/pull/7307.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7307.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7306 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7306/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7306/comments | https://api.github.com/repos/huggingface/datasets/issues/7306/events | https://github.com/huggingface/datasets/issues/7306 | 2,719,807,464 | I_kwDODunzps6iHPfo | 7,306 | Creating new dataset from list loses information. (Audio Information Lost - either Datatype or Values). | {
"login": "ai-nikolai",
"id": 9797804,
"node_id": "MDQ6VXNlcjk3OTc4MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9797804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ai-nikolai",
"html_url": "https://github.com/ai-nikolai",
"followers_url": "https://api.github.com/users/ai-nikolai/followers",
"following_url": "https://api.github.com/users/ai-nikolai/following{/other_user}",
"gists_url": "https://api.github.com/users/ai-nikolai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ai-nikolai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ai-nikolai/subscriptions",
"organizations_url": "https://api.github.com/users/ai-nikolai/orgs",
"repos_url": "https://api.github.com/users/ai-nikolai/repos",
"events_url": "https://api.github.com/users/ai-nikolai/events{/privacy}",
"received_events_url": "https://api.github.com/users/ai-nikolai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-05T09:07:53 | 2024-12-05T09:09:38 | null | NONE | null | ### Describe the bug
When creating a dataset from a list of datapoints, information about the individual items is lost.
Specifically, when creating a dataset from a list of datapoints (taken from another dataset), either the datatype is lost or the values are lost. See the examples below.
-> What is the best way to create a dataset from a list of datapoints?
---
e.g.:
**When running this code:**
```python
from datasets import load_dataset, Dataset
commonvoice_data = load_dataset("mozilla-foundation/common_voice_17_0", "it", split="test", streaming=True)
datapoint = next(iter(commonvoice_data))
out = [datapoint]
new_data = Dataset.from_list(out) #this loses datatype information
new_data2= Dataset.from_list(out,features=commonvoice_data.features) #this loses value information
```
**We get the following**:
---
1. `datapoint`: (the original datapoint)
```
'audio': {'path': 'it_test_0/common_voice_it_23606167.mp3', 'array': array([0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
2.21619011e-05, 2.72628222e-05, 0.00000000e+00]), 'sampling_rate': 48000}
```
Original Dataset Features:
```
>>> commonvoice_data.features
'audio': Audio(sampling_rate=48000, mono=True, decode=True, id=None)
```
- Here we see column "audio", has the proper values (both `path` & and `array`) and has the correct datatype (Audio).
----
2. new_data[0]:
```
# Cannot be printed (as it prints the entire array).
```
New Dataset 1 Features:
```
>>> new_data.features
'audio': {'array': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None), 'path': Value(dtype='string', id=None), 'sampling_rate': Value(dtype='int64', id=None)}
```
- Here we see that the column "audio", has the correct values, but is not the Audio datatype anymore.
---
3. new_data2[0]:
```
'audio': {'path': None, 'array': array([0., 0., 0., ..., 0., 0., 0.]), 'sampling_rate': 48000},
```
New Dataset 2 Features:
```
>>> new_data2.features
'audio': Audio(sampling_rate=48000, mono=True, decode=True, id=None),
```
- Here we see that the column "audio", has the correct datatype, but all the array & path values were lost!
### Steps to reproduce the bug
## Run:
```python
from datasets import load_dataset, Dataset
commonvoice_data = load_dataset("mozilla-foundation/common_voice_17_0", "it", split="test", streaming=True)
datapoint = next(iter(commonvoice_data))
out = [datapoint]
new_data = Dataset.from_list(out) #this loses datatype information
new_data2= Dataset.from_list(out,features=commonvoice_data.features) #this loses value information
```
### Expected behavior
## Expected:
```datapoint == new_data[0]```
AND
```datapoint == new_data2[0]```
### Environment info
- `datasets` version: 3.1.0
- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.26.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7306/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7305 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7305/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7305/comments | https://api.github.com/repos/huggingface/datasets/issues/7305/events | https://github.com/huggingface/datasets/issues/7305 | 2,715,907,267 | I_kwDODunzps6h4XTD | 7,305 | Build Documentation Test Fails Due to "Bad Credentials" Error | {
"login": "ruidazeng",
"id": 31152346,
"node_id": "MDQ6VXNlcjMxMTUyMzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/31152346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ruidazeng",
"html_url": "https://github.com/ruidazeng",
"followers_url": "https://api.github.com/users/ruidazeng/followers",
"following_url": "https://api.github.com/users/ruidazeng/following{/other_user}",
"gists_url": "https://api.github.com/users/ruidazeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ruidazeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruidazeng/subscriptions",
"organizations_url": "https://api.github.com/users/ruidazeng/orgs",
"repos_url": "https://api.github.com/users/ruidazeng/repos",
"events_url": "https://api.github.com/users/ruidazeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/ruidazeng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-12-03T20:22:54 | 2024-12-03T20:22:54 | null | CONTRIBUTOR | null | ### Describe the bug
The `Build documentation / build / build_main_documentation (push)` job is consistently failing during the "Syncing repository" step. The error occurs when attempting to determine the default branch name, resulting in "Bad credentials" errors.
### Steps to reproduce the bug
1. Trigger the `build_main_documentation` job.
2. Observe the logs during the "Syncing repository" step.
### Expected behavior
The workflow should be able to retrieve the default branch name without encountering credential issues.
### Environment info
```plaintext
Syncing repository: huggingface/notebooks
Getting Git version info
Temporarily overriding HOME='/home/runner/work/_temp/00e62748-9940-4a4f-bbbc-eb2cda6d7ed6' before making global git config changes
Adding repository directory to the temporary git global config as a safe directory
/usr/bin/git config --global --add safe.directory /home/runner/work/datasets/datasets/notebooks
Initializing the repository
Disabling automatic garbage collection
Setting up auth
Determining the default branch
Retrieving the default branch name
Bad credentials - https://docs.github.com/rest
Waiting 20 seconds before trying again
Retrieving the default branch name
Bad credentials - https://docs.github.com/rest
Waiting 19 seconds before trying again
Retrieving the default branch name
Error: Bad credentials - https://docs.github.com/rest
``` | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7305/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7304 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7304/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7304/comments | https://api.github.com/repos/huggingface/datasets/issues/7304/events | https://github.com/huggingface/datasets/pull/7304 | 2,715,179,811 | PR_kwDODunzps6D5saw | 7,304 | Update iterable_dataset.py | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-12-03T14:25:42 | 2024-12-03T14:28:10 | 2024-12-03T14:27:02 | MEMBER | null | close https://github.com/huggingface/datasets/issues/7297 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7304/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7304/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7304",
"html_url": "https://github.com/huggingface/datasets/pull/7304",
"diff_url": "https://github.com/huggingface/datasets/pull/7304.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7304.patch",
"merged_at": "2024-12-03T14:27:02"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7303 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7303/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7303/comments | https://api.github.com/repos/huggingface/datasets/issues/7303/events | https://github.com/huggingface/datasets/issues/7303 | 2,705,729,696 | I_kwDODunzps6hRiig | 7,303 | DataFilesNotFoundError for datasets LM1B | {
"login": "hml1996-fight",
"id": 72264324,
"node_id": "MDQ6VXNlcjcyMjY0MzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/72264324?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hml1996-fight",
"html_url": "https://github.com/hml1996-fight",
"followers_url": "https://api.github.com/users/hml1996-fight/followers",
"following_url": "https://api.github.com/users/hml1996-fight/following{/other_user}",
"gists_url": "https://api.github.com/users/hml1996-fight/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hml1996-fight/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hml1996-fight/subscriptions",
"organizations_url": "https://api.github.com/users/hml1996-fight/orgs",
"repos_url": "https://api.github.com/users/hml1996-fight/repos",
"events_url": "https://api.github.com/users/hml1996-fight/events{/privacy}",
"received_events_url": "https://api.github.com/users/hml1996-fight/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-11-29T17:27:45 | 2024-12-11T13:22:47 | 2024-12-11T13:22:47 | NONE | null | ### Describe the bug
Cannot load the dataset https://huggingface.co/datasets/billion-word-benchmark/lm1b
### Steps to reproduce the bug
`dataset = datasets.load_dataset('lm1b', split=split)`
### Expected behavior
```
Traceback (most recent call last):
  File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/word_freq.py", line 13, in <module>
    train_data = DiffusionLoader(tokenizer=tokenizer).my_load(task_name='lm1b', splits=['train'])[0]
  File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 20, in my_load
    return [self._load(task_name, name) for name in splits]
  File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 20, in <listcomp>
    return [self._load(task_name, name) for name in splits]
  File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 13, in _load
    dataset = datasets.load_dataset('lm1b', split=split)
  File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 2594, in load_dataset
    builder_instance = load_dataset_builder(
  File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 2266, in load_dataset_builder
    dataset_module = dataset_module_factory(
  File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 1827, in dataset_module_factory
    ).get_module()
  File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 1040, in get_module
    module_name, default_builder_kwargs = infer_module_for_data_files(
  File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 598, in infer_module_for_data_files
    raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else ""))
datasets.exceptions.DataFilesNotFoundError: No (supported) data files found in lm1b
```
### Environment info
datasets: 2.20.0 | {
"login": "hml1996-fight",
"id": 72264324,
"node_id": "MDQ6VXNlcjcyMjY0MzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/72264324?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hml1996-fight",
"html_url": "https://github.com/hml1996-fight",
"followers_url": "https://api.github.com/users/hml1996-fight/followers",
"following_url": "https://api.github.com/users/hml1996-fight/following{/other_user}",
"gists_url": "https://api.github.com/users/hml1996-fight/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hml1996-fight/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hml1996-fight/subscriptions",
"organizations_url": "https://api.github.com/users/hml1996-fight/orgs",
"repos_url": "https://api.github.com/users/hml1996-fight/repos",
"events_url": "https://api.github.com/users/hml1996-fight/events{/privacy}",
"received_events_url": "https://api.github.com/users/hml1996-fight/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7303/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7303/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7302 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7302/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7302/comments | https://api.github.com/repos/huggingface/datasets/issues/7302/events | https://github.com/huggingface/datasets/pull/7302 | 2,702,626,386 | PR_kwDODunzps6DfY8G | 7,302 | Let server decide default repo visibility | {
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-11-28T16:01:13 | 2024-11-29T17:00:40 | 2024-11-29T17:00:38 | CONTRIBUTOR | null | Until now, all repos were public by default when created without passing the `private` argument, which meant that passing `private=False` or `private=None` was strictly the same. This is no longer the case: Enterprise Hub now lets organizations set a default visibility for new repos, which is useful for organizations that forbid public repos for security reasons. This PR mostly updates docstrings and default values so that `private=None` is always passed when users don't set it manually.
This PR doesn't introduce any breaking change; the real update was done server-side when the new Enterprise Hub feature was introduced. Related to https://github.com/huggingface/huggingface_hub/pull/2679. | {
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7302/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7302",
"html_url": "https://github.com/huggingface/datasets/pull/7302",
"diff_url": "https://github.com/huggingface/datasets/pull/7302.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7302.patch",
"merged_at": "2024-11-29T17:00:38"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7301 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7301/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7301/comments | https://api.github.com/repos/huggingface/datasets/issues/7301/events | https://github.com/huggingface/datasets/pull/7301 | 2,701,813,922 | PR_kwDODunzps6DdYLZ | 7,301 | update load_dataset doctring | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-11-28T11:19:20 | 2024-11-29T10:31:43 | 2024-11-29T10:31:40 | MEMBER | null | - remove canonical dataset name
- remove dataset script logic
- add streaming info
- clearer download and prepare steps | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7301/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7301",
"html_url": "https://github.com/huggingface/datasets/pull/7301",
"diff_url": "https://github.com/huggingface/datasets/pull/7301.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7301.patch",
"merged_at": "2024-11-29T10:31:40"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7300 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7300/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7300/comments | https://api.github.com/repos/huggingface/datasets/issues/7300/events | https://github.com/huggingface/datasets/pull/7300 | 2,701,424,320 | PR_kwDODunzps6Dcba8 | 7,300 | fix: update elasticsearch version | {
"login": "ruidazeng",
"id": 31152346,
"node_id": "MDQ6VXNlcjMxMTUyMzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/31152346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ruidazeng",
"html_url": "https://github.com/ruidazeng",
"followers_url": "https://api.github.com/users/ruidazeng/followers",
"following_url": "https://api.github.com/users/ruidazeng/following{/other_user}",
"gists_url": "https://api.github.com/users/ruidazeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ruidazeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruidazeng/subscriptions",
"organizations_url": "https://api.github.com/users/ruidazeng/orgs",
"repos_url": "https://api.github.com/users/ruidazeng/repos",
"events_url": "https://api.github.com/users/ruidazeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/ruidazeng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-11-28T09:14:21 | 2024-12-03T14:36:56 | 2024-12-03T14:24:42 | CONTRIBUTOR | null | This should fix the `test_py311 (windows latest, deps-latest)` errors.
```
=========================== short test summary info ===========================
ERROR tests/test_search.py - AttributeError: `np.float_` was removed in the NumPy 2.0 release. Use `np.float64` instead.
ERROR tests/test_search.py - AttributeError: `np.float_` was removed in the NumPy 2.0 release. Use `np.float64` instead.
===== 2822 passed, 54 skipped, 10 warnings, 2 errors in 373.36s (0:06:13) =====
Error: Process completed with exit code 1.
```
The elasticsearch version used is `elasticsearch==7.9.1`, which is 4 years old and uses the removed `numpy.float_`.
elasticsearch fixed this in [elastic/elasticsearch-py#2551](https://github.com/elastic/elasticsearch-py/pull/2551) and released the fix in 8.15.0 (August 2024) and 7.17.12 (September 2024).
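The rename itself is mechanical; a quick sketch of the replacement (assumes NumPy is installed — on 2.x the old `np.float_` alias raises `AttributeError`, while `np.float64` works on both 1.x and 2.x):

```python
import numpy as np

# `np.float_` was an alias for the double-precision scalar type; after its
# removal in NumPy 2.0, `np.float64` is the drop-in replacement.
value = np.float64(1.5)
print(value.dtype)  # float64
```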
| {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7300/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7300/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7300",
"html_url": "https://github.com/huggingface/datasets/pull/7300",
"diff_url": "https://github.com/huggingface/datasets/pull/7300.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7300.patch",
"merged_at": "2024-12-03T14:24:42"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7299 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7299/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7299/comments | https://api.github.com/repos/huggingface/datasets/issues/7299/events | https://github.com/huggingface/datasets/issues/7299 | 2,695,378,251 | I_kwDODunzps6gqDVL | 7,299 | Efficient Image Augmentation in Hugging Face Datasets | {
"login": "fabiozappo",
"id": 46443190,
"node_id": "MDQ6VXNlcjQ2NDQzMTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/46443190?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fabiozappo",
"html_url": "https://github.com/fabiozappo",
"followers_url": "https://api.github.com/users/fabiozappo/followers",
"following_url": "https://api.github.com/users/fabiozappo/following{/other_user}",
"gists_url": "https://api.github.com/users/fabiozappo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fabiozappo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fabiozappo/subscriptions",
"organizations_url": "https://api.github.com/users/fabiozappo/orgs",
"repos_url": "https://api.github.com/users/fabiozappo/repos",
"events_url": "https://api.github.com/users/fabiozappo/events{/privacy}",
"received_events_url": "https://api.github.com/users/fabiozappo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-11-26T16:50:32 | 2024-11-26T16:53:53 | null | NONE | null | ### Describe the bug
I'm using the Hugging Face datasets library to load images in batches and would like to apply a torchvision transform to fix the inconsistent image sizes in the dataset, plus some on-the-fly image augmentation. The only approach I can think of is the collate_fn, but that seems quite inefficient.
I'm new to the Hugging Face datasets library and didn't find anything in the documentation or in the issues here on GitHub.
Is there an existing way to add image transformations directly to the dataset loading pipeline?
### Steps to reproduce the bug
```python
from datasets import load_dataset
from torch.utils.data import DataLoader

def collate_fn(batch):
images = [item['image'] for item in batch]
texts = [item['text'] for item in batch]
return {
'images': images,
'texts': texts
}
dataset = load_dataset("Yuki20/pokemon_caption", split="train")
dataloader = DataLoader(dataset, batch_size=4, collate_fn=collate_fn)
# Output shows varying image sizes:
# [(1280, 1280), (431, 431), (789, 789), (769, 769)]
```
### Expected behavior
I'm looking for a way to resize images on-the-fly when loading the dataset, similar to PyTorch's Dataset.__getitem__ functionality. This would be more efficient than handling resizing in the collate_fn.
### Environment info
- `datasets` version: 3.1.0
- Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.11.10
- `huggingface_hub` version: 0.26.2
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7299/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7298 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7298/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7298/comments | https://api.github.com/repos/huggingface/datasets/issues/7298/events | https://github.com/huggingface/datasets/issues/7298 | 2,694,196,968 | I_kwDODunzps6gli7o | 7,298 | loading dataset issue with load_dataset() when training controlnet | {
"login": "bigbraindump",
"id": 81594044,
"node_id": "MDQ6VXNlcjgxNTk0MDQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/81594044?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bigbraindump",
"html_url": "https://github.com/bigbraindump",
"followers_url": "https://api.github.com/users/bigbraindump/followers",
"following_url": "https://api.github.com/users/bigbraindump/following{/other_user}",
"gists_url": "https://api.github.com/users/bigbraindump/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bigbraindump/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bigbraindump/subscriptions",
"organizations_url": "https://api.github.com/users/bigbraindump/orgs",
"repos_url": "https://api.github.com/users/bigbraindump/repos",
"events_url": "https://api.github.com/users/bigbraindump/events{/privacy}",
"received_events_url": "https://api.github.com/users/bigbraindump/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-11-26T10:50:18 | 2024-11-26T10:50:18 | null | NONE | null | ### Describe the bug
I'm unable to load my dataset for [ControlNet training](https://github.com/huggingface/diffusers/blob/074e12358bc17e7dbe111ea4f62f05dbae8a49d5/examples/controlnet/train_controlnet.py#L606) using load_dataset(); however, load_from_disk() seems to work.
I would appreciate it if someone could explain why that is the case.
1. For reference, here's the structure of the original training files _before_ dataset creation:
```
- dir train
- dir A (illustrations)
- dir B (SignWriting)
- prompt.json containing:
{"source": "B/file.png", "target": "A/file.png", "prompt": "..."}
```
2. Here are the features _after_ dataset creation:
```
"features": {
"control_image": {
"_type": "Image"
},
"image": {
"_type": "Image"
},
"caption": {
"dtype": "string",
"_type": "Value"
}
```
3. I've also attempted to upload the dataset to huggingface with the same error output
### Steps to reproduce the bug
1. [dataset creation script](https://github.com/sign-language-processing/signwriting-illustration/blob/main/signwriting_illustration/controlnet_huggingface/dataset.py)
2. controlnet [training script](examples/controlnet/train_controlnet.py) used
3. training parameters -
```shell
accelerate launch diffusers/examples/controlnet/train_controlnet.py \
  --pretrained_model_name_or_path="stable-diffusion-v1-5/stable-diffusion-v1-5" \
  --output_dir="$OUTPUT_DIR" \
  --train_data_dir="$HF_DATASET_DIR" \
  --conditioning_image_column=control_image \
  --image_column=image \
  --caption_column=caption \
  --resolution=512 \
  --learning_rate=1e-5 \
  --validation_image "./validation/0a4b3c71265bb3a726457837428dda78.png" "./validation/0a5922fe2c638e6776bd62f623145004.png" "./validation/1c9f1a53106f64c682cf5d009ee7156f.png" \
  --validation_prompt "An illustration of a man with short hair" "An illustration of a woman with short hair" "An illustration of Barack Obama" \
  --train_batch_size=4 \
  --num_train_epochs=500 \
  --tracker_project_name="sd-controlnet-signwriting-test" \
  --hub_model_id="sarahahtee/signwriting-illustration-test" \
  --checkpointing_steps=5000 \
  --validation_steps=1000 \
  --report_to wandb \
  --push_to_hub
```
4. Command:
`sbatch --export=HUGGINGFACE_TOKEN=hf_token,WANDB_API_KEY=api_key script.sh`
### Expected behavior
```
11/25/2024 17:12:18 - INFO - __main__ - Initializing controlnet weights from unet
Generating train split: 1 examples [00:00, 334.85 examples/s]
Traceback (most recent call last):
File "/data/user/user/signwriting_illustration/controlnet_huggingface/diffusers/examples/controlnet/train_controlnet.py", line 1189, in <module>
main(args)
File "/data/user/user/signwriting_illustration/controlnet_huggingface/diffusers/examples/controlnet/train_controlnet.py", line 923, in main
train_dataset = make_train_dataset(args, tokenizer, accelerator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/user/user/signwriting_illustration/controlnet_huggingface/diffusers/examples/controlnet/train_controlnet.py", line 639, in make_train_dataset
raise ValueError(
ValueError: `--image_column` value 'image' not found in dataset columns. Dataset columns are: _data_files, _fingerprint, _format_columns, _format_kwargs, _format_type, _output_all_columns, _split
```
### Environment info
accelerate 1.1.1
huggingface-hub 0.26.2
python 3.11
torch 2.5.1
transformers 4.46.2 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7298/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7298/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7297 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7297/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7297/comments | https://api.github.com/repos/huggingface/datasets/issues/7297/events | https://github.com/huggingface/datasets/issues/7297 | 2,683,977,430 | I_kwDODunzps6f-j7W | 7,297 | wrong return type for `IterableDataset.shard()` | {
"login": "ysngshn",
"id": 47225236,
"node_id": "MDQ6VXNlcjQ3MjI1MjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/47225236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ysngshn",
"html_url": "https://github.com/ysngshn",
"followers_url": "https://api.github.com/users/ysngshn/followers",
"following_url": "https://api.github.com/users/ysngshn/following{/other_user}",
"gists_url": "https://api.github.com/users/ysngshn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ysngshn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ysngshn/subscriptions",
"organizations_url": "https://api.github.com/users/ysngshn/orgs",
"repos_url": "https://api.github.com/users/ysngshn/repos",
"events_url": "https://api.github.com/users/ysngshn/events{/privacy}",
"received_events_url": "https://api.github.com/users/ysngshn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-11-22T17:25:46 | 2024-12-03T14:27:27 | 2024-12-03T14:27:03 | NONE | null | ### Describe the bug
`IterableDataset.shard()` has the wrong return type annotation: it is annotated as `"Dataset"` when it should be `"IterableDataset"`. This makes my IDE unhappy.
### Steps to reproduce the bug
look at [the source code](https://github.com/huggingface/datasets/blob/main/src/datasets/iterable_dataset.py#L2668)?
### Expected behavior
Correct return type as `"IterableDataset"`
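A sketch of what the one-line fix amounts to — the forward-reference string in the annotation simply needs to name the right class (the parameter list below is illustrative, not copied from the actual source):

```python
class IterableDataset:
    def shard(self, num_shards: int, index: int) -> "IterableDataset":
        # Forward reference as a string, so no runtime import is needed.
        ...
```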
### Environment info
datasets==3.1.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7297/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7296 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7296/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7296/comments | https://api.github.com/repos/huggingface/datasets/issues/7296/events | https://github.com/huggingface/datasets/pull/7296 | 2,675,573,974 | PR_kwDODunzps6ChJIJ | 7,296 | Remove upper version limit of fsspec[http] | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-11-20T11:29:16 | 2024-11-20T11:29:16 | null | NONE | null | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7296/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7296/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7296",
"html_url": "https://github.com/huggingface/datasets/pull/7296",
"diff_url": "https://github.com/huggingface/datasets/pull/7296.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7296.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7295 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7295/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7295/comments | https://api.github.com/repos/huggingface/datasets/issues/7295/events | https://github.com/huggingface/datasets/issues/7295 | 2,672,003,384 | I_kwDODunzps6fQ4k4 | 7,295 | [BUG]: Streaming from S3 triggers `unexpected keyword argument 'requote_redirect_url'` | {
"login": "casper-hansen",
"id": 27340033,
"node_id": "MDQ6VXNlcjI3MzQwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/27340033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/casper-hansen",
"html_url": "https://github.com/casper-hansen",
"followers_url": "https://api.github.com/users/casper-hansen/followers",
"following_url": "https://api.github.com/users/casper-hansen/following{/other_user}",
"gists_url": "https://api.github.com/users/casper-hansen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/casper-hansen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/casper-hansen/subscriptions",
"organizations_url": "https://api.github.com/users/casper-hansen/orgs",
"repos_url": "https://api.github.com/users/casper-hansen/repos",
"events_url": "https://api.github.com/users/casper-hansen/events{/privacy}",
"received_events_url": "https://api.github.com/users/casper-hansen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-11-19T12:23:36 | 2024-11-19T13:01:53 | null | NONE | null | ### Describe the bug
Note that this bug is only triggered when `streaming=True`. #5459 introduced always calling fsspec with `client_kwargs={"requote_redirect_url": False}`, which seems to have incompatibility issues even in the newest versions.
Analysis of what's happening:
1. `datasets` passes the `client_kwargs` through `fsspec`
2. `fsspec` passes the `client_kwargs` through `s3fs`
3. `s3fs` passes the `client_kwargs` to `aiobotocore` which uses `aiohttp`
```
s3creator = self.session.create_client(
"s3", config=conf, **init_kwargs, **client_kwargs
)
```
4. The `session` tries to create an `aiohttp`-backed client, but `_create_client` does not accept arbitrary `**kwargs`; the forwarded keys (`requote_redirect_url` and `trust_env`) are rejected as unexpected keyword arguments.
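As a toy model (my own sketch, not the real fsspec/s3fs/aiobotocore code), the failure mode looks like this: a dict of `client_kwargs` is forwarded unchanged through the layers until it reaches a function whose signature does not accept those keys.

```python
def _create_client(service, *, config=None, trust_env=False):
    # stand-in for AioSession._create_client: fixed keyword parameters only
    return {"service": service, "config": config, "trust_env": trust_env}

def create_client(service, config=None, **client_kwargs):
    # stand-in for the s3fs layer: forwards client_kwargs unchanged
    return _create_client(service, config=config, **client_kwargs)

# datasets always injects this key (see #5459):
client_kwargs = {"requote_redirect_url": False}
try:
    create_client("s3", **client_kwargs)
except TypeError as e:
    print(e)  # ... got an unexpected keyword argument 'requote_redirect_url'
```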
Error:
```
Traceback (most recent call last):
File "/Users/cxrh/Documents/GitHub/nlp_foundation/nlp_train/test.py", line 14, in <module>
batch = next(iter(ds))
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1353, in __iter__
for key, example in ex_iterable:
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 255, in __iter__
for key, pa_table in self.generate_tables_fn(**self.kwargs):
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py", line 78, in _generate_tables
for file_idx, file in enumerate(itertools.chain.from_iterable(files)):
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 840, in __iter__
yield from self.generator(*self.args, **self.kwargs)
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 921, in _iter_from_urlpaths
elif xisdir(urlpath, download_config=download_config):
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 305, in xisdir
return fs.isdir(inner_path)
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/spec.py", line 721, in isdir
return self.info(path)["type"] == "directory"
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/archive.py", line 38, in info
self._get_dirs()
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/filesystems/compression.py", line 64, in _get_dirs
f = {**self.file.fs.info(self.file.path), "name": self.uncompressed_name}
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 118, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 103, in sync
raise return_result
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 56, in _runner
result[0] = await coro
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 1302, in _info
out = await self._call_s3(
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 341, in _call_s3
await self.set_session()
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 524, in set_session
s3creator = self.session.create_client(
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/aiobotocore/session.py", line 114, in create_client
return ClientCreatorContext(self._create_client(*args, **kwargs))
TypeError: AioSession._create_client() got an unexpected keyword argument 'requote_redirect_url'
```
### Steps to reproduce the bug
1. Install the necessary libraries (`datasets` must be at least 2.19.0):
```
pip install s3fs fsspec aiohttp aiobotocore botocore 'datasets>=2.19.0'
```
2. Run this code:
```
from datasets import load_dataset
ds = load_dataset(
"json",
data_files="s3://your_path/*.jsonl.gz",
streaming=True,
split="train",
)
batch = next(iter(ds))
print(batch)
```
3. You get the `unexpected keyword argument 'requote_redirect_url'` error.
### Expected behavior
The `datasets` library is able to load a batch from the dataset stored on S3 without triggering the `requote_redirect_url` error.
Fix: I could fix this locally by removing the `requote_redirect_url` and `trust_env` keys before they are forwarded; then it loads properly.
<img width="1127" alt="image" src="https://github.com/user-attachments/assets/4c40efa9-8787-4919-b613-e4908c3d1ab2">
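The same workaround can be expressed as a small kwargs filter (hypothetical helper name; a sketch of the manual edit, not an official API):

```python
def drop_incompatible_client_kwargs(client_kwargs,
                                    drop=("requote_redirect_url", "trust_env")):
    # strip keys that aiobotocore's client creation does not accept
    return {k: v for k, v in client_kwargs.items() if k not in drop}

safe = drop_incompatible_client_kwargs({"requote_redirect_url": False,
                                        "endpoint_url": "https://s3.example.com"})
print(safe)  # {'endpoint_url': 'https://s3.example.com'}
```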
### Environment info
- `datasets` version: 3.1.0
- Platform: macOS-15.1-arm64-arm-64bit
- Python version: 3.10.15
- `huggingface_hub` version: 0.26.2
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7295/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7295/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7294 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7294/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7294/comments | https://api.github.com/repos/huggingface/datasets/issues/7294/events | https://github.com/huggingface/datasets/pull/7294 | 2,668,663,130 | PR_kwDODunzps6CQKTy | 7,294 | Remove `aiohttp` from direct dependencies | {
"login": "akx",
"id": 58669,
"node_id": "MDQ6VXNlcjU4NjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/58669?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akx",
"html_url": "https://github.com/akx",
"followers_url": "https://api.github.com/users/akx/followers",
"following_url": "https://api.github.com/users/akx/following{/other_user}",
"gists_url": "https://api.github.com/users/akx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akx/subscriptions",
"organizations_url": "https://api.github.com/users/akx/orgs",
"repos_url": "https://api.github.com/users/akx/repos",
"events_url": "https://api.github.com/users/akx/events{/privacy}",
"received_events_url": "https://api.github.com/users/akx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-11-18T14:00:59 | 2024-11-18T14:00:59 | null | NONE | null | The dependency is only used for catching an exception from other code. That can be done with an import guard.
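A common shape for such an import guard (my own sketch, not the actual diff in this PR) is to bind the exception name at import time and substitute a never-raised placeholder when `aiohttp` is absent:

```python
try:
    from aiohttp.client_exceptions import ClientError as _AiohttpClientError
except ImportError:
    class _AiohttpClientError(Exception):
        """Placeholder: never raised, so `except` clauses simply never match."""

def is_network_error(exc):
    # hypothetical caller: still catches aiohttp errors when it is installed,
    # and degrades gracefully when it is not
    return isinstance(exc, (_AiohttpClientError, ConnectionError))
```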
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7294/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7294",
"html_url": "https://github.com/huggingface/datasets/pull/7294",
"diff_url": "https://github.com/huggingface/datasets/pull/7294.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7294.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7293 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7293/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7293/comments | https://api.github.com/repos/huggingface/datasets/issues/7293/events | https://github.com/huggingface/datasets/pull/7293 | 2,664,592,054 | PR_kwDODunzps6CIjS- | 7,293 | Updated inconsistent output in documentation examples for `ClassLabel` | {
"login": "sergiopaniego",
"id": 17179696,
"node_id": "MDQ6VXNlcjE3MTc5Njk2",
"avatar_url": "https://avatars.githubusercontent.com/u/17179696?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sergiopaniego",
"html_url": "https://github.com/sergiopaniego",
"followers_url": "https://api.github.com/users/sergiopaniego/followers",
"following_url": "https://api.github.com/users/sergiopaniego/following{/other_user}",
"gists_url": "https://api.github.com/users/sergiopaniego/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sergiopaniego/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sergiopaniego/subscriptions",
"organizations_url": "https://api.github.com/users/sergiopaniego/orgs",
"repos_url": "https://api.github.com/users/sergiopaniego/repos",
"events_url": "https://api.github.com/users/sergiopaniego/events{/privacy}",
"received_events_url": "https://api.github.com/users/sergiopaniego/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-11-16T16:20:57 | 2024-12-06T11:33:33 | 2024-12-06T11:32:01 | CONTRIBUTOR | null | fix #7129
@stevhliu | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7293/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7293",
"html_url": "https://github.com/huggingface/datasets/pull/7293",
"diff_url": "https://github.com/huggingface/datasets/pull/7293.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7293.patch",
"merged_at": "2024-12-06T11:32:01"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7292/comments | https://api.github.com/repos/huggingface/datasets/issues/7292/events | https://github.com/huggingface/datasets/issues/7292 | 2,664,250,855 | I_kwDODunzps6ezT3n | 7,292 | DataFilesNotFoundError for datasets `OpenMol/PubChemSFT` | {
"login": "xnuohz",
"id": 17878022,
"node_id": "MDQ6VXNlcjE3ODc4MDIy",
"avatar_url": "https://avatars.githubusercontent.com/u/17878022?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xnuohz",
"html_url": "https://github.com/xnuohz",
"followers_url": "https://api.github.com/users/xnuohz/followers",
"following_url": "https://api.github.com/users/xnuohz/following{/other_user}",
"gists_url": "https://api.github.com/users/xnuohz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xnuohz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xnuohz/subscriptions",
"organizations_url": "https://api.github.com/users/xnuohz/orgs",
"repos_url": "https://api.github.com/users/xnuohz/repos",
"events_url": "https://api.github.com/users/xnuohz/events{/privacy}",
"received_events_url": "https://api.github.com/users/xnuohz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-11-16T11:54:31 | 2024-11-19T00:53:00 | 2024-11-19T00:52:59 | NONE | null | ### Describe the bug
Cannot load the dataset https://huggingface.co/datasets/OpenMol/PubChemSFT
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset('OpenMol/PubChemSFT')
```
### Expected behavior
```
---------------------------------------------------------------------------
DataFilesNotFoundError Traceback (most recent call last)
Cell In[7], line 2
      1 from datasets import load_dataset
----> 2 dataset = load_dataset('OpenMol/PubChemSFT')
File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2587, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
   2582 verification_mode = VerificationMode(
   2583     (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
   2584 )
   2586 # Create a dataset builder
-> 2587 builder_instance = load_dataset_builder(
   2588     path=path,
   2589     name=name,
   2590     data_dir=data_dir,
   2591     data_files=data_files,
   2592     cache_dir=cache_dir,
   2593     features=features,
   2594     download_config=download_config,
   2595     download_mode=download_mode,
   2596     revision=revision,
   2597     token=token,
   2598     storage_options=storage_options,
   2599     trust_remote_code=trust_remote_code,
   2600     _require_default_config_name=name is None,
   2601     **config_kwargs,
   2602 )
   2604 # Return iterable dataset in case of streaming
   2605 if streaming:
File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2259, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)
   2257 download_config = download_config.copy() if download_config else DownloadConfig()
   2258 download_config.storage_options.update(storage_options)
-> 2259 dataset_module = dataset_module_factory(
   2260     path,
   2261     revision=revision,
   2262     download_config=download_config,
   2263     download_mode=download_mode,
   2264     data_dir=data_dir,
   2265     data_files=data_files,
   2266     cache_dir=cache_dir,
   2267     trust_remote_code=trust_remote_code,
   2268     _require_default_config_name=_require_default_config_name,
   2269     _require_custom_configs=bool(config_kwargs),
   2270 )
   2271 # Get dataset builder class from the processing script
   2272 builder_kwargs = dataset_module.builder_kwargs
File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1904, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)
   1902     raise ConnectionError(f"Couldn't reach the Hugging Face Hub for dataset '{path}': {e1}") from None
   1903 if isinstance(e1, (DataFilesNotFoundError, DatasetNotFoundError, EmptyDatasetError)):
-> 1904     raise e1 from None
   1905 if isinstance(e1, FileNotFoundError):
   1906     raise FileNotFoundError(
   1907         f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
   1908         f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
   1909     ) from None
File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1885, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)
   1876     return HubDatasetModuleFactoryWithScript(
   1877         path,
   1878         revision=revision,
   (...)
   1882         trust_remote_code=trust_remote_code,
   1883     ).get_module()
   1884 else:
-> 1885     return HubDatasetModuleFactoryWithoutScript(
   1886         path,
   1887         revision=revision,
   1888         data_dir=data_dir,
   1889         data_files=data_files,
   1890         download_config=download_config,
   1891         download_mode=download_mode,
   1892     ).get_module()
   1893 except Exception as e1:
   1894     # All the attempts failed, before raising the error we should check if the module is already cached
   1895     try:
File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1270, in HubDatasetModuleFactoryWithoutScript.get_module(self)
   1263 patterns = get_data_patterns(base_path, download_config=self.download_config)
   1264 data_files = DataFilesDict.from_patterns(
   1265     patterns,
   1266     base_path=base_path,
   1267     allowed_extensions=ALL_ALLOWED_EXTENSIONS,
   1268     download_config=self.download_config,
   1269 )
-> 1270 module_name, default_builder_kwargs = infer_module_for_data_files(
   1271     data_files=data_files,
   1272     path=self.name,
   1273     download_config=self.download_config,
   1274 )
   1275 data_files = data_files.filter_extensions(_MODULE_TO_EXTENSIONS[module_name])
   1276 # Collect metadata files if the module supports them
File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:597, in infer_module_for_data_files(data_files, path, download_config)
    595     raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}")
    596 if not module_name:
--> 597     raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else ""))
    598 return module_name, default_builder_kwargs
DataFilesNotFoundError: No (supported) data files found in OpenMol/PubChemSFT
```
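For context, a minimal sketch (my own simplification, not the actual `infer_module_for_data_files` implementation) of why extension-based inference raises when no file in the repo has a supported extension:

```python
import os

SUPPORTED = {".json": "json", ".jsonl": "json", ".csv": "csv",
             ".parquet": "parquet", ".txt": "text"}  # illustrative subset

def infer_module(filenames):
    # collect the loader modules implied by each recognized extension
    modules = {SUPPORTED[ext] for f in filenames
               if (ext := os.path.splitext(f)[1]) in SUPPORTED}
    if not modules:
        raise FileNotFoundError("No (supported) data files found")
    return modules

print(infer_module(["train.jsonl", "test.jsonl"]))  # {'json'}
# Files with unrecognized extensions (e.g. .pkl) trigger the error:
# infer_module(["train.pkl"]) raises FileNotFoundError
```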
### Environment info
```
- `datasets` version: 3.1.0
- Platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.31
- Python version: 3.9.18
- `huggingface_hub` version: 0.25.2
- PyArrow version: 18.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2023.9.2
``` | {
"login": "xnuohz",
"id": 17878022,
"node_id": "MDQ6VXNlcjE3ODc4MDIy",
"avatar_url": "https://avatars.githubusercontent.com/u/17878022?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xnuohz",
"html_url": "https://github.com/xnuohz",
"followers_url": "https://api.github.com/users/xnuohz/followers",
"following_url": "https://api.github.com/users/xnuohz/following{/other_user}",
"gists_url": "https://api.github.com/users/xnuohz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xnuohz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xnuohz/subscriptions",
"organizations_url": "https://api.github.com/users/xnuohz/orgs",
"repos_url": "https://api.github.com/users/xnuohz/repos",
"events_url": "https://api.github.com/users/xnuohz/events{/privacy}",
"received_events_url": "https://api.github.com/users/xnuohz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7292/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7291 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7291/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7291/comments | https://api.github.com/repos/huggingface/datasets/issues/7291/events | https://github.com/huggingface/datasets/issues/7291 | 2,662,244,643 | I_kwDODunzps6erqEj | 7,291 | Why return_tensors='pt' doesn't work? | {
"login": "bw-wang19",
"id": 86752851,
"node_id": "MDQ6VXNlcjg2NzUyODUx",
"avatar_url": "https://avatars.githubusercontent.com/u/86752851?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bw-wang19",
"html_url": "https://github.com/bw-wang19",
"followers_url": "https://api.github.com/users/bw-wang19/followers",
"following_url": "https://api.github.com/users/bw-wang19/following{/other_user}",
"gists_url": "https://api.github.com/users/bw-wang19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bw-wang19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bw-wang19/subscriptions",
"organizations_url": "https://api.github.com/users/bw-wang19/orgs",
"repos_url": "https://api.github.com/users/bw-wang19/repos",
"events_url": "https://api.github.com/users/bw-wang19/events{/privacy}",
"received_events_url": "https://api.github.com/users/bw-wang19/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-11-15T15:01:23 | 2024-11-18T13:47:08 | null | NONE | null | ### Describe the bug
I tried to add `input_ids` to a dataset with `map()`, passing `return_tensors='pt'`, but the values I get back have type `List`. Why is that?
![image](https://github.com/user-attachments/assets/ab046e20-2174-4e91-9cd6-4a296a43e83c)
### Steps to reproduce the bug
![image](https://github.com/user-attachments/assets/5d504d4c-22c7-4742-99a1-9cab78739b17)
### Expected behavior
Sorry for the silly question, I'm a noob with this tool. But I think it should return a tensor value, since I passed `return_tensors='pt'`?
When I tokenize a single sentence with `tokenized_input = tokenizer(input, return_tensors='pt')`, it does return tensors. Why doesn't that work in `map()`?
### Environment info
transformers>=4.41.2,<=4.45.0
datasets>=2.16.0,<=2.21.0
accelerate>=0.30.1,<=0.34.2
peft>=0.11.1,<=0.12.0
trl>=0.8.6,<=0.9.6
gradio>=4.0.0
pandas>=2.0.0
scipy
einops
sentencepiece
tiktoken
protobuf
uvicorn
pydantic
fastapi
sse-starlette
matplotlib>=3.7.0
fire
packaging
pyyaml
numpy<2.0.0
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7291/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7290 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7290/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7290/comments | https://api.github.com/repos/huggingface/datasets/issues/7290/events | https://github.com/huggingface/datasets/issues/7290 | 2,657,620,816 | I_kwDODunzps6eaBNQ | 7,290 | `Dataset.save_to_disk` hangs when using num_proc > 1 | {
"login": "JohannesAck",
"id": 22243463,
"node_id": "MDQ6VXNlcjIyMjQzNDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/22243463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohannesAck",
"html_url": "https://github.com/JohannesAck",
"followers_url": "https://api.github.com/users/JohannesAck/followers",
"following_url": "https://api.github.com/users/JohannesAck/following{/other_user}",
"gists_url": "https://api.github.com/users/JohannesAck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohannesAck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohannesAck/subscriptions",
"organizations_url": "https://api.github.com/users/JohannesAck/orgs",
"repos_url": "https://api.github.com/users/JohannesAck/repos",
"events_url": "https://api.github.com/users/JohannesAck/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohannesAck/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-11-14T05:25:13 | 2024-11-14T05:25:13 | null | NONE | null | ### Describe the bug
Hi, I've encountered a small issue when saving datasets that can make saving take up to multiple hours.
Specifically, [`Dataset.save_to_disk`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.save_to_disk) is a lot slower when using `num_proc>1` than when using `num_proc=1`
The documentation mentions that "Multiprocessing is disabled by default.", but there is no explanation of how to enable it.
### Steps to reproduce the bug
```
import numpy as np
from datasets import Dataset
n_samples = int(4e6)
n_tokens_sample = 100
data_dict = {
'tokens' : np.random.randint(0, 100, (n_samples, n_tokens_sample)),
}
dataset = Dataset.from_dict(data_dict)
dataset.save_to_disk('test_dataset', num_proc=1)
dataset.save_to_disk('test_dataset', num_proc=4)
dataset.save_to_disk('test_dataset', num_proc=8)
```
This results in:
```
>>> dataset.save_to_disk('test_dataset', num_proc=1)
Saving the dataset (7/7 shards): 100%|██████████████| 4000000/4000000 [00:17<00:00, 228075.15 examples/s]
>>> dataset.save_to_disk('test_dataset', num_proc=4)
Saving the dataset (7/7 shards): 100%|██████████████| 4000000/4000000 [01:49<00:00, 36583.75 examples/s]
>>> dataset.save_to_disk('test_dataset', num_proc=8)
Saving the dataset (8/8 shards): 100%|██████████████| 4000000/4000000 [02:11<00:00, 30518.43 examples/s]
```
With larger datasets it can take hours, but I didn't benchmark that for this bug report.
### Expected behavior
I would expect using `num_proc>1` to be faster instead of slower than `num_proc=1`.
### Environment info
- `datasets` version: 3.1.0
- Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.26.2
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7290/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7290/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7289 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7289/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7289/comments | https://api.github.com/repos/huggingface/datasets/issues/7289/events | https://github.com/huggingface/datasets/issues/7289 | 2,648,019,507 | I_kwDODunzps6d1ZIz | 7,289 | Dataset viewer displays wrong statists | {
"login": "speedcell4",
"id": 3585459,
"node_id": "MDQ6VXNlcjM1ODU0NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/speedcell4",
"html_url": "https://github.com/speedcell4",
"followers_url": "https://api.github.com/users/speedcell4/followers",
"following_url": "https://api.github.com/users/speedcell4/following{/other_user}",
"gists_url": "https://api.github.com/users/speedcell4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/speedcell4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/speedcell4/subscriptions",
"organizations_url": "https://api.github.com/users/speedcell4/orgs",
"repos_url": "https://api.github.com/users/speedcell4/repos",
"events_url": "https://api.github.com/users/speedcell4/events{/privacy}",
"received_events_url": "https://api.github.com/users/speedcell4/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-11-11T03:29:27 | 2024-11-13T13:02:25 | 2024-11-13T13:02:25 | NONE | null | ### Describe the bug
In [my dataset](https://huggingface.co/datasets/speedcell4/opus-unigram2), there is a column called `lang2` with 94 different classes in total, but the viewer says there are only 83 values. This issue only arises in the `train` split; for the `test` and `dev` splits, the viewer shows the correct total of 94.
<img width="177" alt="image" src="https://github.com/user-attachments/assets/78d76ef2-fe0e-4fa3-85e0-fb2552813d1c">
### Steps to reproduce the bug
```python3
from datasets import load_dataset
ds = load_dataset('speedcell4/opus-unigram2').unique('lang2')
for key, lang2 in ds.items():
print(key, len(lang2))
```
This script returns the following, showing that the `train` split has 94 values in the `lang2` column.
```
train 94
dev 94
test 94
zero 5
```
### Expected behavior
The viewer should report 94.
### Environment info
Collecting environment information...
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Linux release 8.2.2004 (Core) (x86_64)
GCC version: (GCC) 8.3.1 20191121 (Red Hat 8.3.1-5)
Clang version: Could not collect
CMake version: version 3.11.4
Libc version: glibc-2.28
Python version: 3.9.20 (main, Oct 3 2024, 07:27:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 525.85.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 4
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7542 32-Core Processor
Stepping: 0
CPU MHz: 3389.114
BogoMIPS: 5789.40
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
NUMA node2 CPU(s): 32-47
NUMA node3 CPU(s): 48-63
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.1+cu121
[pip3] torchaudio==2.4.1+cu121
[pip3] torchdevice==0.1.1
[pip3] torchglyph==0.3.2
[pip3] torchmetrics==1.5.0
[pip3] torchrua==0.5.1
[pip3] torchvision==0.19.1+cu121
[pip3] triton==3.0.0
[pip3] datasets==3.0.1
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.4.1+cu121 pypi_0 pypi
[conda] torchaudio 2.4.1+cu121 pypi_0 pypi
[conda] torchdevice 0.1.1 pypi_0 pypi
[conda] torchglyph 0.3.2 pypi_0 pypi
[conda] torchmetrics 1.5.0 pypi_0 pypi
[conda] torchrua 0.5.1 pypi_0 pypi
[conda] torchvision 0.19.1+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi | {
"login": "speedcell4",
"id": 3585459,
"node_id": "MDQ6VXNlcjM1ODU0NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/speedcell4",
"html_url": "https://github.com/speedcell4",
"followers_url": "https://api.github.com/users/speedcell4/followers",
"following_url": "https://api.github.com/users/speedcell4/following{/other_user}",
"gists_url": "https://api.github.com/users/speedcell4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/speedcell4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/speedcell4/subscriptions",
"organizations_url": "https://api.github.com/users/speedcell4/orgs",
"repos_url": "https://api.github.com/users/speedcell4/repos",
"events_url": "https://api.github.com/users/speedcell4/events{/privacy}",
"received_events_url": "https://api.github.com/users/speedcell4/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7289/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7289/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7288 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7288/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7288/comments | https://api.github.com/repos/huggingface/datasets/issues/7288/events | https://github.com/huggingface/datasets/pull/7288 | 2,647,052,280 | PR_kwDODunzps6BbIpz | 7,288 | Release v3.1.1 | {
"login": "alex-hh",
"id": 5719745,
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alex-hh",
"html_url": "https://github.com/alex-hh",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-11-10T09:38:15 | 2024-11-10T09:38:48 | 2024-11-10T09:38:48 | CONTRIBUTOR | null | null | {
"login": "alex-hh",
"id": 5719745,
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alex-hh",
"html_url": "https://github.com/alex-hh",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7288/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7288",
"html_url": "https://github.com/huggingface/datasets/pull/7288",
"diff_url": "https://github.com/huggingface/datasets/pull/7288.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7288.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7287 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7287/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7287/comments | https://api.github.com/repos/huggingface/datasets/issues/7287/events | https://github.com/huggingface/datasets/issues/7287 | 2,646,958,393 | I_kwDODunzps6dxWE5 | 7,287 | Support for identifier-based automated split construction | {
"login": "alex-hh",
"id": 5719745,
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alex-hh",
"html_url": "https://github.com/alex-hh",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 3 | 2024-11-10T07:45:19 | 2024-11-19T14:37:02 | null | CONTRIBUTOR | null | ### Feature request
As far as I understand, automated construction of splits for hub datasets is currently based on either file names or directory structure ([as described here](https://huggingface.co/docs/datasets/en/repository_structure))
It would seem to be pretty useful to also allow splits to be based on identifiers of individual examples
This could be configured like
{"split_name": {"column_name": [column values in split]}}
(This in turn requires unique 'index' columns, which could be explicitly supported or just assumed to be defined appropriately by the user).
I guess a potential downside would be that shards would end up spanning different splits - is this something that can be handled somehow? Would this only affect streaming from hub?
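To make the proposed semantics concrete, here is a pure-Python sketch (illustrative only; `build_splits` is a made-up helper name, and in `datasets` this would presumably map onto column-value filtering of the underlying files):

```python
def build_splits(rows, spec):
    """Partition rows according to {"split_name": {"column_name": [allowed values]}}."""
    splits = {}
    for split_name, condition in spec.items():
        (column, values), = condition.items()  # one identifier column per split
        allowed = set(values)
        splits[split_name] = [row for row in rows if row[column] in allowed]
    return splits

rows = [{"id": i, "text": f"example {i}"} for i in range(5)]
spec = {"train": {"id": [0, 1, 2]}, "test": {"id": [3, 4]}}
splits = build_splits(rows, spec)
print({name: [r["id"] for r in rs] for name, rs in splits.items()})
# {'train': [0, 1, 2], 'test': [3, 4]}
```

Multiple such specs over the same files would give multiple independent sets of splits without duplicating any data.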
### Motivation
The main motivation would be that all data files could be stored in a single directory, and multiple sets of splits could be generated from the same data. This is often useful for large datasets with multiple distinct sets of splits.
This could all be configured via the README.md yaml configs
### Your contribution
May be able to contribute if it seems like a good idea | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7287/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7286 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7286/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7286/comments | https://api.github.com/repos/huggingface/datasets/issues/7286/events | https://github.com/huggingface/datasets/issues/7286 | 2,645,350,151 | I_kwDODunzps6drNcH | 7,286 | Concurrent loading in `load_from_disk` - `num_proc` as a param | {
"login": "unography",
"id": 5240449,
"node_id": "MDQ6VXNlcjUyNDA0NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5240449?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/unography",
"html_url": "https://github.com/unography",
"followers_url": "https://api.github.com/users/unography/followers",
"following_url": "https://api.github.com/users/unography/following{/other_user}",
"gists_url": "https://api.github.com/users/unography/gists{/gist_id}",
"starred_url": "https://api.github.com/users/unography/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unography/subscriptions",
"organizations_url": "https://api.github.com/users/unography/orgs",
"repos_url": "https://api.github.com/users/unography/repos",
"events_url": "https://api.github.com/users/unography/events{/privacy}",
"received_events_url": "https://api.github.com/users/unography/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 0 | 2024-11-08T23:21:40 | 2024-11-09T16:14:37 | 2024-11-09T16:14:37 | NONE | null | ### Feature request
https://github.com/huggingface/datasets/pull/6464 mentions a `num_proc` param for loading a dataset from disk, but I can't find it anywhere in the documentation or code
### Motivation
Make loading large datasets from disk faster
### Your contribution
Happy to contribute if given pointers | {
"login": "unography",
"id": 5240449,
"node_id": "MDQ6VXNlcjUyNDA0NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5240449?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/unography",
"html_url": "https://github.com/unography",
"followers_url": "https://api.github.com/users/unography/followers",
"following_url": "https://api.github.com/users/unography/following{/other_user}",
"gists_url": "https://api.github.com/users/unography/gists{/gist_id}",
"starred_url": "https://api.github.com/users/unography/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unography/subscriptions",
"organizations_url": "https://api.github.com/users/unography/orgs",
"repos_url": "https://api.github.com/users/unography/repos",
"events_url": "https://api.github.com/users/unography/events{/privacy}",
"received_events_url": "https://api.github.com/users/unography/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7286/timeline | null | not_planned | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7285 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7285/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7285/comments | https://api.github.com/repos/huggingface/datasets/issues/7285/events | https://github.com/huggingface/datasets/pull/7285 | 2,644,488,598 | PR_kwDODunzps6BV3Gu | 7,285 | Release v3.1.0 | {
"login": "alex-hh",
"id": 5719745,
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alex-hh",
"html_url": "https://github.com/alex-hh",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-11-08T16:17:58 | 2024-11-08T16:18:05 | 2024-11-08T16:18:05 | CONTRIBUTOR | null | null | {
"login": "alex-hh",
"id": 5719745,
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alex-hh",
"html_url": "https://github.com/alex-hh",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7285/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7285/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7285",
"html_url": "https://github.com/huggingface/datasets/pull/7285",
"diff_url": "https://github.com/huggingface/datasets/pull/7285.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7285.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7284 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7284/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7284/comments | https://api.github.com/repos/huggingface/datasets/issues/7284/events | https://github.com/huggingface/datasets/pull/7284 | 2,644,302,386 | PR_kwDODunzps6BVUSh | 7,284 | support for custom feature encoding/decoding | {
"login": "alex-hh",
"id": 5719745,
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alex-hh",
"html_url": "https://github.com/alex-hh",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-11-08T15:04:08 | 2024-11-21T16:09:47 | 2024-11-21T16:09:47 | CONTRIBUTOR | null | Fix for https://github.com/huggingface/datasets/issues/7220 as suggested in discussion, in preference to #7221
(My only concern would be the effect on type checking with custom feature types that aren't covered by `FeatureType`.)
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7284/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7284",
"html_url": "https://github.com/huggingface/datasets/pull/7284",
"diff_url": "https://github.com/huggingface/datasets/pull/7284.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7284.patch",
"merged_at": "2024-11-21T16:09:47"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7283 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7283/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7283/comments | https://api.github.com/repos/huggingface/datasets/issues/7283/events | https://github.com/huggingface/datasets/pull/7283 | 2,642,537,708 | PR_kwDODunzps6BQUgH | 7,283 | Allow for variation in metadata file names as per issue #7123 | {
"login": "egrace479",
"id": 38985481,
"node_id": "MDQ6VXNlcjM4OTg1NDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/38985481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/egrace479",
"html_url": "https://github.com/egrace479",
"followers_url": "https://api.github.com/users/egrace479/followers",
"following_url": "https://api.github.com/users/egrace479/following{/other_user}",
"gists_url": "https://api.github.com/users/egrace479/gists{/gist_id}",
"starred_url": "https://api.github.com/users/egrace479/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/egrace479/subscriptions",
"organizations_url": "https://api.github.com/users/egrace479/orgs",
"repos_url": "https://api.github.com/users/egrace479/repos",
"events_url": "https://api.github.com/users/egrace479/events{/privacy}",
"received_events_url": "https://api.github.com/users/egrace479/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-11-08T00:44:47 | 2024-11-08T00:44:47 | null | NONE | null | Allow metadata files to have an identifying preface. Specifically, it will recognize files with `-metadata.csv` or `_metadata.csv` as metadata files for the purposes of the dataset viewer functionality.
Resolves #7123. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7283/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7283/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7283",
"html_url": "https://github.com/huggingface/datasets/pull/7283",
"diff_url": "https://github.com/huggingface/datasets/pull/7283.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7283.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7282/comments | https://api.github.com/repos/huggingface/datasets/issues/7282/events | https://github.com/huggingface/datasets/issues/7282 | 2,642,075,491 | I_kwDODunzps6det9j | 7,282 | Faulty datasets.exceptions.ExpectedMoreSplitsError | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-11-07T20:15:01 | 2024-11-07T20:15:42 | null | CONTRIBUTOR | null | ### Describe the bug
I am trying to download only the 'validation' split of my dataset; instead, I hit the error `datasets.exceptions.ExpectedMoreSplitsError`.
Appears to be the same undesired behavior as reported in [#6939](https://github.com/huggingface/datasets/issues/6939), but with `data_files`, not `data_dir`.
Here is the Traceback:
```
Traceback (most recent call last):
File "/home/user/app/app.py", line 12, in <module>
ds = load_dataset('datacomp/imagenet-1k-random0.0', token=GATED_IMAGENET, data_files={'validation': 'data/val*'}, split='validation', trust_remote_code=True)
File "/usr/local/lib/python3.10/site-packages/datasets/load.py", line 2154, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/site-packages/datasets/builder.py", line 924, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/site-packages/datasets/builder.py", line 1018, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/usr/local/lib/python3.10/site-packages/datasets/utils/info_utils.py", line 68, in verify_splits
raise ExpectedMoreSplitsError(str(set(expected_splits) - set(recorded_splits)))
datasets.exceptions.ExpectedMoreSplitsError: {'train', 'test'}
```
Note: I am using the `data_files` argument only because I am trying to specify that I only want the 'validation' split, and the whole dataset will be downloaded even when the `split='validation'` argument is specified, unless you also specify `data_files`, as described here: https://discuss.huggingface.co/t/how-can-i-download-a-specific-split-of-a-dataset/79027
### Steps to reproduce the bug
1. Create a Space with the default blank 'gradio' SDK https://huggingface.co/new-space
2. Create a file 'app.py' that loads a dataset to only extract a 'validation' split:
`ds = load_dataset('datacomp/imagenet-1k-random0.0', token=GATED_IMAGENET, data_files={'validation': 'data/val*'}, split='validation', trust_remote_code=True)`
### Expected behavior
Downloading validation split.
### Environment info
Default environment for creating a new Space. Relevant to this bug, that is:
```
FROM docker.io/library/python:3.10@sha256:fd0fa50d997eb56ce560c6e5ca6a1f5cf8fdff87572a16ac07fb1f5ca01eb608
--> RUN pip install --no-cache-dir pip==22.3.1 && pip install --no-cache-dir datasets "huggingface-hub>=0.19" "hf-transfer>=0.1.4" "protobuf<4" "click<8.1"
``` | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7282/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7281 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7281/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7281/comments | https://api.github.com/repos/huggingface/datasets/issues/7281/events | https://github.com/huggingface/datasets/issues/7281 | 2,640,346,339 | I_kwDODunzps6dYHzj | 7,281 | File not found error | {
"login": "MichielBontenbal",
"id": 37507786,
"node_id": "MDQ6VXNlcjM3NTA3Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/37507786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MichielBontenbal",
"html_url": "https://github.com/MichielBontenbal",
"followers_url": "https://api.github.com/users/MichielBontenbal/followers",
"following_url": "https://api.github.com/users/MichielBontenbal/following{/other_user}",
"gists_url": "https://api.github.com/users/MichielBontenbal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MichielBontenbal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichielBontenbal/subscriptions",
"organizations_url": "https://api.github.com/users/MichielBontenbal/orgs",
"repos_url": "https://api.github.com/users/MichielBontenbal/repos",
"events_url": "https://api.github.com/users/MichielBontenbal/events{/privacy}",
"received_events_url": "https://api.github.com/users/MichielBontenbal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-11-07T09:04:49 | 2024-11-07T09:22:43 | null | NONE | null | ### Describe the bug
I get a FileNotFoundError:
<img width="944" alt="image" src="https://github.com/user-attachments/assets/1336bc08-06f6-4682-a3c0-071ff65efa87">
### Steps to reproduce the bug
See screenshot.
### Expected behavior
I want to load one audiofile from the dataset.
### Environment info
macOS 14.6.1 (23G93), Intel
Python 3.10.9
NumPy 1.23
datasets: latest version
"url": "https://api.github.com/repos/huggingface/datasets/issues/7281/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7281/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7280 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7280/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7280/comments | https://api.github.com/repos/huggingface/datasets/issues/7280/events | https://github.com/huggingface/datasets/issues/7280 | 2,639,977,077 | I_kwDODunzps6dWtp1 | 7,280 | Add filename in error message when ReadError or similar occur | {
"login": "elisa-aleman",
"id": 37046039,
"node_id": "MDQ6VXNlcjM3MDQ2MDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/37046039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elisa-aleman",
"html_url": "https://github.com/elisa-aleman",
"followers_url": "https://api.github.com/users/elisa-aleman/followers",
"following_url": "https://api.github.com/users/elisa-aleman/following{/other_user}",
"gists_url": "https://api.github.com/users/elisa-aleman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elisa-aleman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elisa-aleman/subscriptions",
"organizations_url": "https://api.github.com/users/elisa-aleman/orgs",
"repos_url": "https://api.github.com/users/elisa-aleman/repos",
"events_url": "https://api.github.com/users/elisa-aleman/events{/privacy}",
"received_events_url": "https://api.github.com/users/elisa-aleman/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 5 | 2024-11-07T06:00:53 | 2024-11-20T13:23:12 | null | NONE | null | Please update error messages to include relevant information for debugging when loading datasets with `load_dataset()` that may have a few corrupted files.
Whenever downloading a full dataset, some files might be corrupted (either at the source or from downloading corruption).
However, the errors often only let me know it was a tar file when `tarfile.ReadError` appears in the traceback, and I imagine it is similar for other file types.
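For illustration, a minimal sketch of the kind of re-raise that would help (`safe_list_tar` is a hypothetical helper, not part of `datasets`):

```python
import tarfile

def safe_list_tar(archive_path):
    # Hypothetical helper: attach the offending archive path to the
    # error message instead of surfacing a bare ReadError.
    try:
        with tarfile.open(archive_path) as tf:
            return tf.getnames()
    except tarfile.ReadError as err:
        raise tarfile.ReadError(f"{err} (while reading {archive_path!r})") from err
```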
Without the file name in the message, it is really hard to debug which file is corrupted, and when dealing with very large datasets, it shouldn't be necessary to force a re-download of everything. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7280/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7279 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7279/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7279/comments | https://api.github.com/repos/huggingface/datasets/issues/7279/events | https://github.com/huggingface/datasets/pull/7279 | 2,635,813,932 | PR_kwDODunzps6A8pTJ | 7,279 | Feature proposal: Stacking, potentially heterogeneous, datasets | {
"login": "TimCares",
"id": 96243987,
"node_id": "U_kgDOBbyREw",
"avatar_url": "https://avatars.githubusercontent.com/u/96243987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TimCares",
"html_url": "https://github.com/TimCares",
"followers_url": "https://api.github.com/users/TimCares/followers",
"following_url": "https://api.github.com/users/TimCares/following{/other_user}",
"gists_url": "https://api.github.com/users/TimCares/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TimCares/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TimCares/subscriptions",
"organizations_url": "https://api.github.com/users/TimCares/orgs",
"repos_url": "https://api.github.com/users/TimCares/repos",
"events_url": "https://api.github.com/users/TimCares/events{/privacy}",
"received_events_url": "https://api.github.com/users/TimCares/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-11-05T15:40:50 | 2024-11-05T15:40:50 | null | NONE | null | ### Introduction
Hello there,
I noticed that there are two ways to combine multiple datasets: Either through `datasets.concatenate_datasets` or `datasets.interleave_datasets`. However, to my knowledge (please correct me if I am wrong) both approaches require the datasets that are combined to have the same features.
I think it would be a great idea to add support for combining multiple datasets that might not follow the same schema (i.e. have different features), for example an image and text dataset. That is why I propose a third function of the `datasets.combine` module called `stack_datasets`, which can be used to combine a list of datasets with (potentially) different features. This would look as follows:
```python
>>> from datasets import stack_datasets
>>> image_dataset = ...
>>> next(iter(image_dataset))
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=555x416 at 0x313E79CD0> }
>>> text_dataset = ...
>>> next(iter(text_dataset))
{'text': "This is a test."}
>>> stacked = stack_datasets(datasets={'i_ds': image_dataset, 't_ds': text_dataset}, stopping_strategy='all_exhausted')
>>> next(iter(stacked))
{
'i_ds': {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=555x416 at 0x313E79CD0> }
't_ds': {'text': "This is a test."}
}
```
<br />
### Motivation
I motivate this by:
**A**: The fact that PyTorch offers similar functionality under `torch.utils.data.StackDataset` ([link](https://pytorch.org/docs/stable/data.html#torch.utils.data.StackDataset)).
**B**: In settings where one would like to e.g. train a Vision-Language model using an image-text dataset, an image dataset, and a text dataset, this functionality would offer a clean and intuitive solution to create multimodal datasets. I am aware that the aforementioned is also feasible without my proposed function, but I believe this offers a nice approach that aligns with existing functionality and is directly provided within the `datasets` package.
### API
`stack_datasets` has two arguments: `datasets` and `stopping_strategy`.
<br />
`datasets` is a dictionary of either type `Dict[str, Dataset]` or `Dict[str, IterableDatasets]`, a mixture is not allowed. It contains the names of the datasets (the keys) and the datasets themselves (the values) that should be stacked. Each item returned is a dictionary with one key-value pair for each dataset. The keys are the names of the datasets as provided in the argument `datasets`, and the values are the respective examples from the datasets.
<br />
`stopping_strategy` is the same as for `interleave_datasets`. If it is `first_exhausted`, we stop when the smallest dataset runs out of examples; if it is `all_exhausted`, we stop once every dataset has run out of examples at least once. For `all_exhausted`, that means we may visit examples from some datasets multiple times.
### Docs
I saw that there are multiple documentations and guides on the HuggingFace website that introduce `concatenate_datasets` and `interleave_datasets`, for example [here](https://huggingface.co/docs/datasets/process#concatenate). If this request is merged I would be willing to add the new functionality at the appropriate points in the documentation (if desired).
### Tests
I also added some tests to ensure correctness. Some tests I wrote in [tests/test_iterable_dataset.py](https://github.com/TimCares/datasets/blob/fadc1159debf2a65d44e40cbf7758f2bd2cc8b08/tests/test_iterable_dataset.py#L2169)
run for both `Dataset` and `IterableDataset` even though tests for `Dataset` technically do not belong in this script, but I found that this was a nice way to cover more cases with mostly the same code.
### Additional information
I tried to write the code so that it is similar to that of `concatenate_datasets` and `interleave_datasets`.
I’m open to feedback and willing to make adjustments based on your suggestions, so feel free to give me your take. :)
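As an aside, the position-wise stacking described above can be sketched in plain Python over ordinary iterables. This is only a rough illustration of the two stopping strategies, not an implementation against `datasets` objects (it materializes its inputs for `all_exhausted`, so it assumes finite datasets):

```python
def stack_iterables(named_iterables, stopping_strategy="first_exhausted"):
    """Yield one dict per step, keyed by dataset name (rough sketch)."""
    if stopping_strategy == "first_exhausted":
        # Stop as soon as the smallest input runs out.
        for values in zip(*named_iterables.values()):
            yield dict(zip(named_iterables.keys(), values))
    elif stopping_strategy == "all_exhausted":
        # Cycle shorter inputs until every input has been seen fully once.
        data = {name: list(it) for name, it in named_iterables.items()}
        longest = max(len(rows) for rows in data.values())
        for i in range(longest):
            yield {name: rows[i % len(rows)] for name, rows in data.items()}
    else:
        raise ValueError(f"unknown stopping_strategy: {stopping_strategy!r}")
```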
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7279/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7279",
"html_url": "https://github.com/huggingface/datasets/pull/7279",
"diff_url": "https://github.com/huggingface/datasets/pull/7279.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7279.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7278 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7278/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7278/comments | https://api.github.com/repos/huggingface/datasets/issues/7278/events | https://github.com/huggingface/datasets/pull/7278 | 2,633,436,151 | PR_kwDODunzps6A1ORG | 7,278 | Let soundfile directly read local audio files | {
"login": "fawazahmed0",
"id": 20347013,
"node_id": "MDQ6VXNlcjIwMzQ3MDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/20347013?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fawazahmed0",
"html_url": "https://github.com/fawazahmed0",
"followers_url": "https://api.github.com/users/fawazahmed0/followers",
"following_url": "https://api.github.com/users/fawazahmed0/following{/other_user}",
"gists_url": "https://api.github.com/users/fawazahmed0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fawazahmed0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fawazahmed0/subscriptions",
"organizations_url": "https://api.github.com/users/fawazahmed0/orgs",
"repos_url": "https://api.github.com/users/fawazahmed0/repos",
"events_url": "https://api.github.com/users/fawazahmed0/events{/privacy}",
"received_events_url": "https://api.github.com/users/fawazahmed0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-11-04T17:41:13 | 2024-11-18T14:01:25 | null | NONE | null | - [x] Fixes #7276 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7278/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7278",
"html_url": "https://github.com/huggingface/datasets/pull/7278",
"diff_url": "https://github.com/huggingface/datasets/pull/7278.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7278.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7277 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7277/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7277/comments | https://api.github.com/repos/huggingface/datasets/issues/7277/events | https://github.com/huggingface/datasets/pull/7277 | 2,632,459,184 | PR_kwDODunzps6AyB7O | 7,277 | Add link to video dataset | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-11-04T10:45:12 | 2024-11-04T17:05:06 | 2024-11-04T17:05:06 | CONTRIBUTOR | null | This PR updates https://huggingface.co/docs/datasets/loading to also link to the new video loading docs.
cc @mfarre | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7277/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7277",
"html_url": "https://github.com/huggingface/datasets/pull/7277",
"diff_url": "https://github.com/huggingface/datasets/pull/7277.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7277.patch",
"merged_at": "2024-11-04T17:05:06"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7276 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7276/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7276/comments | https://api.github.com/repos/huggingface/datasets/issues/7276/events | https://github.com/huggingface/datasets/issues/7276 | 2,631,917,431 | I_kwDODunzps6c3993 | 7,276 | Accessing audio dataset value throws Format not recognised error | {
"login": "fawazahmed0",
"id": 20347013,
"node_id": "MDQ6VXNlcjIwMzQ3MDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/20347013?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fawazahmed0",
"html_url": "https://github.com/fawazahmed0",
"followers_url": "https://api.github.com/users/fawazahmed0/followers",
"following_url": "https://api.github.com/users/fawazahmed0/following{/other_user}",
"gists_url": "https://api.github.com/users/fawazahmed0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fawazahmed0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fawazahmed0/subscriptions",
"organizations_url": "https://api.github.com/users/fawazahmed0/orgs",
"repos_url": "https://api.github.com/users/fawazahmed0/repos",
"events_url": "https://api.github.com/users/fawazahmed0/events{/privacy}",
"received_events_url": "https://api.github.com/users/fawazahmed0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-11-04T05:59:13 | 2024-11-09T18:51:52 | null | NONE | null | ### Describe the bug
Accessing an audio dataset value throws a `Format not recognised` error.
### Steps to reproduce the bug
**code:**
```py
from datasets import load_dataset
dataset = load_dataset("fawazahmed0/bug-audio")
for data in dataset["train"]:
print(data)
```
**output:**
```bash
(mypy) C:\Users\Nawaz-Server\Documents\ml>python myest.py
[C:\vcpkg\buildtrees\mpg123\src\0d8db63f9b-3db975bc05.clean\src\libmpg123\layer3.c:INT123_do_layer3():1801] error: dequantization failed!
{'audio': {'path': 'C:\\Users\\Nawaz-Server\\.cache\\huggingface\\hub\\datasets--fawazahmed0--bug-audio\\snapshots\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\data\\Ghamadi\\037135.mp3', 'array': array([ 0.00000000e+00, -2.86519935e-22, -2.56504911e-21, ...,
-1.94239747e-02, -2.42924765e-02, -2.99104657e-02]), 'sampling_rate': 22050}, 'reciter': 'Ghamadi', 'transcription': 'الا عجوز ا في الغبرين', 'line': 3923, 'chapter': 37, 'verse': 135, 'text': 'إِلَّا عَجُوزࣰ ا فِي ٱلۡغَٰبِرِينَ'}
Traceback (most recent call last):
File "C:\Users\Nawaz-Server\Documents\ml\myest.py", line 5, in <module>
for data in dataset["train"]:
~~~~~~~^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\arrow_dataset.py", line 2372, in __iter__
formatted_output = format_table(
^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 639, in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 403, in __call__
return self.format_row(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 444, in format_row
row = self.python_features_decoder.decode_row(row)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 222, in decode_row
return self.features.decode_example(row) if self.features else row
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\features\features.py", line 2042, in decode_example
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\features\features.py", line 1403, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\features\audio.py", line 184, in decode_example
array, sampling_rate = sf.read(f)
^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\soundfile.py", line 285, in read
with SoundFile(file, 'r', samplerate, channels,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\soundfile.py", line 658, in __init__
self._file = self._open(file, mode_int, closefd)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\soundfile.py", line 1216, in _open
raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening <_io.BufferedReader name='C:\\Users\\Nawaz-Server\\.cache\\huggingface\\hub\\datasets--fawazahmed0--bug-audio\\snapshots\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\data\\Ghamadi\\037136.mp3'>: Format not recognised.
```
### Expected behavior
Everything should work, since loading the problematic audio file directly with the soundfile package succeeds:
**code:**
```
import soundfile as sf
print(sf.read('C:\\Users\\Nawaz-Server\\.cache\\huggingface\\hub\\datasets--fawazahmed0--bug-audio\\snapshots\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\data\\Ghamadi\\037136.mp3'))
```
**output:**
```bash
(mypy) C:\Users\Nawaz-Server\Documents\ml>python myest.py
[C:\vcpkg\buildtrees\mpg123\src\0d8db63f9b-3db975bc05.clean\src\libmpg123\layer3.c:INT123_do_layer3():1801] error: dequantization failed!
(array([ 0.00000000e+00, -8.43723821e-22, -2.45370628e-22, ...,
-7.71464454e-03, -6.90496899e-03, -8.63333419e-03]), 22050)
```
### Environment info
- `datasets` version: 3.0.2
- Platform: Windows-11-10.0.22621-SP0
- Python version: 3.12.7
- `huggingface_hub` version: 0.26.2
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.10.0
- soundfile: 0.12.1 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7276/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7275 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7275/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7275/comments | https://api.github.com/repos/huggingface/datasets/issues/7275/events | https://github.com/huggingface/datasets/issues/7275 | 2,631,713,397 | I_kwDODunzps6c3MJ1 | 7,275 | load_dataset | {
"login": "santiagobp99",
"id": 46941974,
"node_id": "MDQ6VXNlcjQ2OTQxOTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/46941974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/santiagobp99",
"html_url": "https://github.com/santiagobp99",
"followers_url": "https://api.github.com/users/santiagobp99/followers",
"following_url": "https://api.github.com/users/santiagobp99/following{/other_user}",
"gists_url": "https://api.github.com/users/santiagobp99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/santiagobp99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/santiagobp99/subscriptions",
"organizations_url": "https://api.github.com/users/santiagobp99/orgs",
"repos_url": "https://api.github.com/users/santiagobp99/repos",
"events_url": "https://api.github.com/users/santiagobp99/events{/privacy}",
"received_events_url": "https://api.github.com/users/santiagobp99/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-11-04T03:01:44 | 2024-11-04T03:01:44 | null | NONE | null | ### Describe the bug
I am performing two operations I saw in a Hugging Face tutorial (Fine-tune a language model). I have to define every aspect inside the mapped functions, including imports of the library, because anything defined outside the function where the dataset elements are being mapped is not recognized:
https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb#scrollTo=iaAJy5Hu3l_B
```python
lm_datasets = tokenized_datasets.map(
    group_texts,
    batched=True,
    batch_size=batch_size,
    num_proc=4,
)

tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])

def tokenize_function(examples):
    model_checkpoint = 'gpt2'
    from transformers import AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)
    return tokenizer(examples["text"])
```
### Steps to reproduce the bug
Currently I handle all the imports inside the function.
### Expected behavior
The code must work as expected in the notebook, but currently this is not happening.
https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb#scrollTo=iaAJy5Hu3l_B
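For what it's worth, a common workaround is to build heavy objects lazily at module level, so the mapped function stays picklable for `num_proc > 1` without redoing the setup on every batch. A minimal sketch (the whitespace tokenizer below is only a stand-in for the real `AutoTokenizer`, which would require transformers to be installed):

```python
_TOKENIZER = None

def get_tokenizer():
    # Build the tokenizer lazily, once per (worker) process, so the
    # mapped function stays picklable when num_proc > 1.
    global _TOKENIZER
    if _TOKENIZER is None:
        _TOKENIZER = str.split  # stand-in for AutoTokenizer.from_pretrained(...)
    return _TOKENIZER

def tokenize_function(examples):
    tok = get_tokenizer()
    return {"tokens": [tok(text) for text in examples["text"]]}
```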
### Environment info
print(transformers.__version__)
4.46.1 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7275/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7275/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7274 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7274/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7274/comments | https://api.github.com/repos/huggingface/datasets/issues/7274/events | https://github.com/huggingface/datasets/pull/7274 | 2,629,882,821 | PR_kwDODunzps6ArEt- | 7,274 | [MINOR:TYPO] Fix typo in exception text | {
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-11-01T21:15:29 | 2024-11-01T21:15:54 | null | CONTRIBUTOR | null | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7274/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7274",
"html_url": "https://github.com/huggingface/datasets/pull/7274",
"diff_url": "https://github.com/huggingface/datasets/pull/7274.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7274.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7273 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7273/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7273/comments | https://api.github.com/repos/huggingface/datasets/issues/7273/events | https://github.com/huggingface/datasets/pull/7273 | 2,628,896,492 | PR_kwDODunzps6An6n8 | 7,273 | Raise error for incorrect JSON serialization | {
"login": "varadhbhatnagar",
"id": 20443618,
"node_id": "MDQ6VXNlcjIwNDQzNjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/20443618?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/varadhbhatnagar",
"html_url": "https://github.com/varadhbhatnagar",
"followers_url": "https://api.github.com/users/varadhbhatnagar/followers",
"following_url": "https://api.github.com/users/varadhbhatnagar/following{/other_user}",
"gists_url": "https://api.github.com/users/varadhbhatnagar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/varadhbhatnagar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varadhbhatnagar/subscriptions",
"organizations_url": "https://api.github.com/users/varadhbhatnagar/orgs",
"repos_url": "https://api.github.com/users/varadhbhatnagar/repos",
"events_url": "https://api.github.com/users/varadhbhatnagar/events{/privacy}",
"received_events_url": "https://api.github.com/users/varadhbhatnagar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-11-01T11:54:35 | 2024-11-18T11:25:01 | 2024-11-18T11:25:01 | CONTRIBUTOR | null | Raise error when `lines = False` and `batch_size < Dataset.num_rows` in `Dataset.to_json()`.
Issue: #7037
Related PRs:
#7039 #7181 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7273/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7273",
"html_url": "https://github.com/huggingface/datasets/pull/7273",
"diff_url": "https://github.com/huggingface/datasets/pull/7273.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7273.patch",
"merged_at": "2024-11-18T11:25:01"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7272 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7272/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7272/comments | https://api.github.com/repos/huggingface/datasets/issues/7272/events | https://github.com/huggingface/datasets/pull/7272 | 2,627,223,390 | PR_kwDODunzps6AirL2 | 7,272 | fix conda release worlflow | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-10-31T15:56:19 | 2024-10-31T15:58:35 | 2024-10-31T15:57:29 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7272/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7272",
"html_url": "https://github.com/huggingface/datasets/pull/7272",
"diff_url": "https://github.com/huggingface/datasets/pull/7272.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7272.patch",
"merged_at": "2024-10-31T15:57:29"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7271 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7271/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7271/comments | https://api.github.com/repos/huggingface/datasets/issues/7271/events | https://github.com/huggingface/datasets/pull/7271 | 2,627,135,540 | PR_kwDODunzps6AiZaj | 7,271 | Set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-10-31T15:22:51 | 2024-10-31T15:25:27 | 2024-10-31T15:22:59 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7271/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7271",
"html_url": "https://github.com/huggingface/datasets/pull/7271",
"diff_url": "https://github.com/huggingface/datasets/pull/7271.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7271.patch",
"merged_at": "2024-10-31T15:22:59"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7270 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7270/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7270/comments | https://api.github.com/repos/huggingface/datasets/issues/7270/events | https://github.com/huggingface/datasets/pull/7270 | 2,627,107,016 | PR_kwDODunzps6AiTJm | 7,270 | Release: 3.1.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-10-31T15:10:01 | 2024-10-31T15:14:23 | 2024-10-31T15:14:20 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7270/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7270",
"html_url": "https://github.com/huggingface/datasets/pull/7270",
"diff_url": "https://github.com/huggingface/datasets/pull/7270.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7270.patch",
"merged_at": "2024-10-31T15:14:20"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7269 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7269/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7269/comments | https://api.github.com/repos/huggingface/datasets/issues/7269/events | https://github.com/huggingface/datasets/issues/7269 | 2,626,873,843 | I_kwDODunzps6ckunz | 7,269 | Memory leak when streaming | {
"login": "Jourdelune",
"id": 64205064,
"node_id": "MDQ6VXNlcjY0MjA1MDY0",
"avatar_url": "https://avatars.githubusercontent.com/u/64205064?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jourdelune",
"html_url": "https://github.com/Jourdelune",
"followers_url": "https://api.github.com/users/Jourdelune/followers",
"following_url": "https://api.github.com/users/Jourdelune/following{/other_user}",
"gists_url": "https://api.github.com/users/Jourdelune/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jourdelune/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jourdelune/subscriptions",
"organizations_url": "https://api.github.com/users/Jourdelune/orgs",
"repos_url": "https://api.github.com/users/Jourdelune/repos",
"events_url": "https://api.github.com/users/Jourdelune/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jourdelune/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-10-31T13:33:52 | 2024-11-18T11:46:07 | null | NONE | null | ### Describe the bug
I try to use a dataset with streaming=True, but the RAM usage grows higher and higher until it is no longer sustainable.
I understand that Hugging Face stores data in RAM during streaming, and that the more workers the dataloader has, the more shards will be stored in RAM. The issue I have, however, is that the RAM usage is not constant: after each new shard is loaded, the RAM usage gets higher and higher.
### Steps to reproduce the bug
You can run this code and watch your RAM usage: after each shard of 255 examples, your RAM usage will increase.
```py
from datasets import load_dataset
from torch.utils.data import DataLoader
dataset = load_dataset("WaveGenAI/dataset", streaming=True)
dataloader = DataLoader(dataset["train"], num_workers=3)
for i, data in enumerate(dataloader):
print(i, end="\r")
```
### Expected behavior
The RAM usage should always stay the same (just 3 shards loaded in RAM).
### Environment info
- `datasets` version: 3.0.1
- Platform: Linux-6.10.5-arch1-1-x86_64-with-glibc2.40
- Python version: 3.12.4
- `huggingface_hub` version: 0.26.0
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7269/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7268 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7268/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7268/comments | https://api.github.com/repos/huggingface/datasets/issues/7268/events | https://github.com/huggingface/datasets/issues/7268 | 2,626,664,687 | I_kwDODunzps6cj7jv | 7,268 | load_from_disk | {
"login": "ghaith-mq",
"id": 71670961,
"node_id": "MDQ6VXNlcjcxNjcwOTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/71670961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghaith-mq",
"html_url": "https://github.com/ghaith-mq",
"followers_url": "https://api.github.com/users/ghaith-mq/followers",
"following_url": "https://api.github.com/users/ghaith-mq/following{/other_user}",
"gists_url": "https://api.github.com/users/ghaith-mq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghaith-mq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghaith-mq/subscriptions",
"organizations_url": "https://api.github.com/users/ghaith-mq/orgs",
"repos_url": "https://api.github.com/users/ghaith-mq/repos",
"events_url": "https://api.github.com/users/ghaith-mq/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghaith-mq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-10-31T11:51:56 | 2024-10-31T14:43:47 | null | NONE | null | ### Describe the bug
I have data saved with save_to_disk. The data is big (700 GB). When I try loading it, the only option is load_from_disk, and this function copies the data to a tmp directory, causing me to run out of disk space. Is there an alternative solution to that?
### Steps to reproduce the bug
when trying to load data using load_from_disk after it was saved using save_to_disk
### Expected behavior
run out of disk space
### Environment info
latest version | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7268/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7267 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7267/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7267/comments | https://api.github.com/repos/huggingface/datasets/issues/7267/events | https://github.com/huggingface/datasets/issues/7267 | 2,626,490,029 | I_kwDODunzps6cjQ6t | 7,267 | Source installation fails on Macintosh with python 3.10 | {
"login": "mayankagarwals",
"id": 39498938,
"node_id": "MDQ6VXNlcjM5NDk4OTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/39498938?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mayankagarwals",
"html_url": "https://github.com/mayankagarwals",
"followers_url": "https://api.github.com/users/mayankagarwals/followers",
"following_url": "https://api.github.com/users/mayankagarwals/following{/other_user}",
"gists_url": "https://api.github.com/users/mayankagarwals/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mayankagarwals/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mayankagarwals/subscriptions",
"organizations_url": "https://api.github.com/users/mayankagarwals/orgs",
"repos_url": "https://api.github.com/users/mayankagarwals/repos",
"events_url": "https://api.github.com/users/mayankagarwals/events{/privacy}",
"received_events_url": "https://api.github.com/users/mayankagarwals/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-10-31T10:18:45 | 2024-11-04T22:18:06 | null | NONE | null | ### Describe the bug
Hi,
Decord is a dev dependency that has not been maintained for a couple of years.
It does not have an ARM package available, rendering it uninstallable on non-Intel-based Macs.
The suggestion is to move to eva-decord (https://github.com/georgia-tech-db/eva-decord), which doesn't have this problem.
Happy to raise a PR
### Steps to reproduce the bug
Source installation as mentioned in contributing.md
### Expected behavior
Installation succeeds without decord failing to install.
### Environment info
python=3.10, M3 Mac | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7267/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7266 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7266/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7266/comments | https://api.github.com/repos/huggingface/datasets/issues/7266/events | https://github.com/huggingface/datasets/issues/7266 | 2,624,666,087 | I_kwDODunzps6ccTnn | 7,266 | The dataset viewer should be available soon. Please retry later. | {
"login": "viiika",
"id": 39821659,
"node_id": "MDQ6VXNlcjM5ODIxNjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/39821659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/viiika",
"html_url": "https://github.com/viiika",
"followers_url": "https://api.github.com/users/viiika/followers",
"following_url": "https://api.github.com/users/viiika/following{/other_user}",
"gists_url": "https://api.github.com/users/viiika/gists{/gist_id}",
"starred_url": "https://api.github.com/users/viiika/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/viiika/subscriptions",
"organizations_url": "https://api.github.com/users/viiika/orgs",
"repos_url": "https://api.github.com/users/viiika/repos",
"events_url": "https://api.github.com/users/viiika/events{/privacy}",
"received_events_url": "https://api.github.com/users/viiika/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-10-30T16:32:00 | 2024-10-31T03:48:11 | 2024-10-31T03:48:10 | NONE | null | ### Describe the bug
After waiting for 2 hours, it still shows "The dataset viewer should be available soon. Please retry later."
### Steps to reproduce the bug
dataset link: https://huggingface.co/datasets/BryanW/HI_EDIT
### Expected behavior
Present the dataset viewer.
### Environment info
NA | {
"login": "viiika",
"id": 39821659,
"node_id": "MDQ6VXNlcjM5ODIxNjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/39821659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/viiika",
"html_url": "https://github.com/viiika",
"followers_url": "https://api.github.com/users/viiika/followers",
"following_url": "https://api.github.com/users/viiika/following{/other_user}",
"gists_url": "https://api.github.com/users/viiika/gists{/gist_id}",
"starred_url": "https://api.github.com/users/viiika/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/viiika/subscriptions",
"organizations_url": "https://api.github.com/users/viiika/orgs",
"repos_url": "https://api.github.com/users/viiika/repos",
"events_url": "https://api.github.com/users/viiika/events{/privacy}",
"received_events_url": "https://api.github.com/users/viiika/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7266/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7265 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7265/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7265/comments | https://api.github.com/repos/huggingface/datasets/issues/7265/events | https://github.com/huggingface/datasets/pull/7265 | 2,624,090,418 | PR_kwDODunzps6AYofJ | 7,265 | Disallow video push_to_hub | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-10-30T13:21:55 | 2024-10-30T13:36:05 | 2024-10-30T13:36:02 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7265/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7265",
"html_url": "https://github.com/huggingface/datasets/pull/7265",
"diff_url": "https://github.com/huggingface/datasets/pull/7265.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7265.patch",
"merged_at": "2024-10-30T13:36:02"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7264 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7264/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7264/comments | https://api.github.com/repos/huggingface/datasets/issues/7264/events | https://github.com/huggingface/datasets/pull/7264 | 2,624,047,640 | PR_kwDODunzps6AYfwL | 7,264 | fix docs relative links | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-10-30T13:07:34 | 2024-10-30T13:10:13 | 2024-10-30T13:09:02 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7264/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7264",
"html_url": "https://github.com/huggingface/datasets/pull/7264",
"diff_url": "https://github.com/huggingface/datasets/pull/7264.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7264.patch",
"merged_at": "2024-10-30T13:09:02"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7263 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7263/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7263/comments | https://api.github.com/repos/huggingface/datasets/issues/7263/events | https://github.com/huggingface/datasets/pull/7263 | 2,621,844,054 | PR_kwDODunzps6ARg7m | 7,263 | Small addition to video docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-10-29T16:58:37 | 2024-10-29T17:01:05 | 2024-10-29T16:59:10 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7263/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7263/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7263",
"html_url": "https://github.com/huggingface/datasets/pull/7263",
"diff_url": "https://github.com/huggingface/datasets/pull/7263.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7263.patch",
"merged_at": "2024-10-29T16:59:10"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7262 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7262/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7262/comments | https://api.github.com/repos/huggingface/datasets/issues/7262/events | https://github.com/huggingface/datasets/pull/7262 | 2,620,879,059 | PR_kwDODunzps6AOWI8 | 7,262 | Allow video with disabeld decoding without decord | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-10-29T10:54:04 | 2024-10-29T10:56:19 | 2024-10-29T10:55:37 | MEMBER | null | for the viewer, this way it can use Video(decode=False) and doesn't need decord (which causes segfaults) | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7262/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7262",
"html_url": "https://github.com/huggingface/datasets/pull/7262",
"diff_url": "https://github.com/huggingface/datasets/pull/7262.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7262.patch",
"merged_at": "2024-10-29T10:55:37"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7261 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7261/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7261/comments | https://api.github.com/repos/huggingface/datasets/issues/7261/events | https://github.com/huggingface/datasets/issues/7261 | 2,620,510,840 | I_kwDODunzps6cMdJ4 | 7,261 | Cannot load the cache when mapping the dataset | {
"login": "zhangn77",
"id": 43033959,
"node_id": "MDQ6VXNlcjQzMDMzOTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/43033959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangn77",
"html_url": "https://github.com/zhangn77",
"followers_url": "https://api.github.com/users/zhangn77/followers",
"following_url": "https://api.github.com/users/zhangn77/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangn77/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangn77/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangn77/subscriptions",
"organizations_url": "https://api.github.com/users/zhangn77/orgs",
"repos_url": "https://api.github.com/users/zhangn77/repos",
"events_url": "https://api.github.com/users/zhangn77/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangn77/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-10-29T08:29:40 | 2024-10-29T08:29:40 | null | NONE | null | ### Describe the bug
I'm training the Flux ControlNet. The `train_dataset.map()` call takes a long time to finish. However, when I kill one training process and want to restart a new training run with the same dataset, I can't reuse the mapped result, even though I defined the cache dir for the dataset.
```python
with accelerator.main_process_first():
    from datasets.fingerprint import Hasher

    # fingerprint used by the cache for the other processes to load the result
    # details: https://github.com/huggingface/diffusers/pull/4038#discussion_r1266078401
    new_fingerprint = Hasher.hash(args)
    train_dataset = train_dataset.map(
        compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint, batch_size=10,
    )
```
### Steps to reproduce the bug
Train the Flux ControlNet, interrupt the process after the dataset has been mapped, and start the training again with the same dataset.
### Expected behavior
The dataset should not be mapped again; the previously cached result should be reused.
### Environment info
latest diffusers
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7261/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7260 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7260/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7260/comments | https://api.github.com/repos/huggingface/datasets/issues/7260/events | https://github.com/huggingface/datasets/issues/7260 | 2,620,014,285 | I_kwDODunzps6cKj7N | 7,260 | cache can't cleaned or disabled | {
"login": "charliedream1",
"id": 15007828,
"node_id": "MDQ6VXNlcjE1MDA3ODI4",
"avatar_url": "https://avatars.githubusercontent.com/u/15007828?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/charliedream1",
"html_url": "https://github.com/charliedream1",
"followers_url": "https://api.github.com/users/charliedream1/followers",
"following_url": "https://api.github.com/users/charliedream1/following{/other_user}",
"gists_url": "https://api.github.com/users/charliedream1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/charliedream1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/charliedream1/subscriptions",
"organizations_url": "https://api.github.com/users/charliedream1/orgs",
"repos_url": "https://api.github.com/users/charliedream1/repos",
"events_url": "https://api.github.com/users/charliedream1/events{/privacy}",
"received_events_url": "https://api.github.com/users/charliedream1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-10-29T03:15:28 | 2024-12-11T09:04:52 | null | NONE | null | ### Describe the bug
I tried the following ways, but the cache can't be disabled.
I have 2 TB of data, but I also got more than 2 TB of cache files, which puts pressure on storage. I need to disable the cache or clean it up immediately after processing. None of the following ways work; please give some help!
```python
from datasets import load_dataset, disable_caching
from transformers import AutoTokenizer

disable_caching()
tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_path)

def tokenization_fn(examples):
    column_name = 'text' if 'text' in examples else 'data'
    tokenized_inputs = tokenizer(
        examples[column_name], return_special_tokens_mask=True, truncation=False,
        max_length=tokenizer.model_max_length
    )
    return tokenized_inputs

data = load_dataset('json', data_files=save_local_path, split='train', cache_dir=None)
data.cleanup_cache_files()
updated_dataset = data.map(tokenization_fn, load_from_cache_file=False)
updated_dataset.cleanup_cache_files()
```
### Expected behavior
No cache files should be generated.
### Environment info
Ubuntu 20.04.6 LTS
datasets 3.0.2 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7260/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7259 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7259/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7259/comments | https://api.github.com/repos/huggingface/datasets/issues/7259/events | https://github.com/huggingface/datasets/pull/7259 | 2,618,909,241 | PR_kwDODunzps6AIEY- | 7,259 | Don't embed videos | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-10-28T16:25:10 | 2024-10-28T16:27:34 | 2024-10-28T16:26:01 | MEMBER | null | Don't include video bytes when running `download_and_prepare(format="parquet")`.
This also affects `push_to_hub`, which will just upload the local paths of the videos, though. | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7259/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7259",
"html_url": "https://github.com/huggingface/datasets/pull/7259",
"diff_url": "https://github.com/huggingface/datasets/pull/7259.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7259.patch",
"merged_at": "2024-10-28T16:26:01"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7258 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7258/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7258/comments | https://api.github.com/repos/huggingface/datasets/issues/7258/events | https://github.com/huggingface/datasets/pull/7258 | 2,618,758,399 | PR_kwDODunzps6AHlK1 | 7,258 | Always set non-null writer batch size | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-10-28T15:26:14 | 2024-10-28T15:28:41 | 2024-10-28T15:26:29 | MEMBER | null | bug introduced in #7230, it was preventing the Viewer limit writes to work | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7258/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7258",
"html_url": "https://github.com/huggingface/datasets/pull/7258",
"diff_url": "https://github.com/huggingface/datasets/pull/7258.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7258.patch",
"merged_at": "2024-10-28T15:26:29"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7257 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7257/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7257/comments | https://api.github.com/repos/huggingface/datasets/issues/7257/events | https://github.com/huggingface/datasets/pull/7257 | 2,618,602,173 | PR_kwDODunzps6AHEfy | 7,257 | fix ci for pyarrow 18 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-10-28T14:31:34 | 2024-10-28T14:34:05 | 2024-10-28T14:31:44 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7257/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7257/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7257",
"html_url": "https://github.com/huggingface/datasets/pull/7257",
"diff_url": "https://github.com/huggingface/datasets/pull/7257.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7257.patch",
"merged_at": "2024-10-28T14:31:44"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7256 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7256/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7256/comments | https://api.github.com/repos/huggingface/datasets/issues/7256/events | https://github.com/huggingface/datasets/pull/7256 | 2,618,580,188 | PR_kwDODunzps6AG_qk | 7,256 | Retry all requests timeouts | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-10-28T14:23:16 | 2024-10-28T14:56:28 | 2024-10-28T14:56:26 | MEMBER | null | as reported in https://github.com/huggingface/datasets/issues/6843 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7256/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7256",
"html_url": "https://github.com/huggingface/datasets/pull/7256",
"diff_url": "https://github.com/huggingface/datasets/pull/7256.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7256.patch",
"merged_at": "2024-10-28T14:56:26"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7255 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7255/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7255/comments | https://api.github.com/repos/huggingface/datasets/issues/7255/events | https://github.com/huggingface/datasets/pull/7255 | 2,618,540,355 | PR_kwDODunzps6AG25R | 7,255 | fix decord import | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-10-28T14:08:19 | 2024-10-28T14:10:43 | 2024-10-28T14:09:14 | MEMBER | null | delay the import until Video() is instantiated + also import duckdb first (otherwise importing duckdb later causes a segfault) | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7255/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7255",
"html_url": "https://github.com/huggingface/datasets/pull/7255",
"diff_url": "https://github.com/huggingface/datasets/pull/7255.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7255.patch",
"merged_at": "2024-10-28T14:09:14"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7254 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7254/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7254/comments | https://api.github.com/repos/huggingface/datasets/issues/7254/events | https://github.com/huggingface/datasets/issues/7254 | 2,616,174,996 | I_kwDODunzps6b76mU | 7,254 | mismatch for datatypes when providing `Features` with `Array2D` and user specified `dtype` and using with_format("numpy") | {
"login": "Akhil-CM",
"id": 97193607,
"node_id": "U_kgDOBcsOhw",
"avatar_url": "https://avatars.githubusercontent.com/u/97193607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Akhil-CM",
"html_url": "https://github.com/Akhil-CM",
"followers_url": "https://api.github.com/users/Akhil-CM/followers",
"following_url": "https://api.github.com/users/Akhil-CM/following{/other_user}",
"gists_url": "https://api.github.com/users/Akhil-CM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Akhil-CM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Akhil-CM/subscriptions",
"organizations_url": "https://api.github.com/users/Akhil-CM/orgs",
"repos_url": "https://api.github.com/users/Akhil-CM/repos",
"events_url": "https://api.github.com/users/Akhil-CM/events{/privacy}",
"received_events_url": "https://api.github.com/users/Akhil-CM/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-10-26T22:06:27 | 2024-10-26T22:07:37 | null | NONE | null | ### Describe the bug
If the user provides a `Features` value to `datasets.Dataset` whose members are `Array2D` with an explicit `dtype`, that `dtype` is not respected by `with_format("numpy")`, which should return an `np.ndarray` with the `dtype` the user provided for `Array2D`. It seems floats are always set to `float32` and ints to `int64`.
### Steps to reproduce the bug
```python
import numpy as np
import datasets
from datasets import Dataset, Features, Array2D
print(f"datasets version: {datasets.__version__}")
data_info = {
    "arr_float" : "float64",
    "arr_int" : "int32"
}
sample = {key : [np.zeros([4, 5], dtype=dtype)] for key, dtype in data_info.items()}
features = {key : Array2D(shape=(None, 5), dtype=dtype) for key, dtype in data_info.items()}
features = Features(features)
dataset = Dataset.from_dict(sample, features=features)
ds = dataset.with_format("numpy")
for key in features:
    print(f"{key} feature dtype: ", ds.features[key].dtype)
    print(f"{key} dtype:", ds[key].dtype)
```
Output:
```bash
datasets version: 3.0.2
arr_float feature dtype: float64
arr_float dtype: float32
arr_int feature dtype: int32
arr_int dtype: int64
```
### Expected behavior
It should return an `np.ndarray` with the `dtype` the user provided for the corresponding member of the `Features` value.
### Environment info
- `datasets` version: 3.0.2
- Platform: Linux-6.11.5-arch1-1-x86_64-with-glibc2.40
- Python version: 3.12.7
- `huggingface_hub` version: 0.26.1
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7254/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7253 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7253/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7253/comments | https://api.github.com/repos/huggingface/datasets/issues/7253/events | https://github.com/huggingface/datasets/issues/7253 | 2,615,862,202 | I_kwDODunzps6b6uO6 | 7,253 | Unable to upload a large dataset zip either from command line or UI | {
"login": "vakyansh",
"id": 159609047,
"node_id": "U_kgDOCYNw1w",
"avatar_url": "https://avatars.githubusercontent.com/u/159609047?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vakyansh",
"html_url": "https://github.com/vakyansh",
"followers_url": "https://api.github.com/users/vakyansh/followers",
"following_url": "https://api.github.com/users/vakyansh/following{/other_user}",
"gists_url": "https://api.github.com/users/vakyansh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vakyansh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vakyansh/subscriptions",
"organizations_url": "https://api.github.com/users/vakyansh/orgs",
"repos_url": "https://api.github.com/users/vakyansh/repos",
"events_url": "https://api.github.com/users/vakyansh/events{/privacy}",
"received_events_url": "https://api.github.com/users/vakyansh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-10-26T13:17:06 | 2024-10-26T13:17:06 | null | NONE | null | ### Describe the bug
Unable to upload a large dataset zip from the command line or the UI. The UI simply says "error". I am trying to upload a tar.gz file of 17GB.
<img width="550" alt="image" src="https://github.com/user-attachments/assets/f9d29024-06c8-49c4-a109-0492cff79d34">
<img width="755" alt="image" src="https://github.com/user-attachments/assets/a8d4acda-7f02-4279-9c2d-b2e0282b4faa">
### Steps to reproduce the bug
Upload a large file
### Expected behavior
The file should upload without any issue.
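(Not part of the original report:) for multi-GB files, uploading with the `huggingface_hub` client and the optional `hf_transfer` backend is often more reliable than the website UI; the repo id and file names below are placeholders:

```python
import os

# Opt in to the accelerated transfer backend (requires `pip install hf_transfer`).
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import HfApi

api = HfApi()
# Placeholder repo id / paths; uncomment to actually upload.
# api.upload_file(
#     path_or_fileobj="data.tar.gz",
#     path_in_repo="data.tar.gz",
#     repo_id="username/my-dataset",
#     repo_type="dataset",
# )
```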
### Environment info
None | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7253/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7252 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7252/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7252/comments | https://api.github.com/repos/huggingface/datasets/issues/7252/events | https://github.com/huggingface/datasets/pull/7252 | 2,613,795,544 | PR_kwDODunzps5_41s7 | 7,252 | Add IterableDataset.shard() | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-10-25T11:07:12 | 2024-10-25T15:45:24 | 2024-10-25T15:45:22 | MEMBER | null | Will be useful to distribute a dataset across workers (other than pytorch) like spark
I also renamed `.n_shards` -> `.num_shards` for consistency and kept the old name for backward compatibility. And a few changes in internal functions for consistency as well (rank, world_size -> num_shards, index)
Breaking change: the new default for `contiguous` in `Dataset.shard()` is `True`, but imo not a big deal since I couldn't find any usage of `contiguous=False` internally (we always do contiguous=True for map-style datasets since it's more optimized) or in the wild | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7252/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7252",
"html_url": "https://github.com/huggingface/datasets/pull/7252",
"diff_url": "https://github.com/huggingface/datasets/pull/7252.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7252.patch",
"merged_at": "2024-10-25T15:45:21"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7251 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7251/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7251/comments | https://api.github.com/repos/huggingface/datasets/issues/7251/events | https://github.com/huggingface/datasets/pull/7251 | 2,612,097,435 | PR_kwDODunzps5_zPTt | 7,251 | Missing video docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-10-24T16:45:12 | 2024-10-24T16:48:29 | 2024-10-24T16:48:27 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7251/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7251/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7251",
"html_url": "https://github.com/huggingface/datasets/pull/7251",
"diff_url": "https://github.com/huggingface/datasets/pull/7251.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7251.patch",
"merged_at": "2024-10-24T16:48:27"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7250 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7250/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7250/comments | https://api.github.com/repos/huggingface/datasets/issues/7250/events | https://github.com/huggingface/datasets/pull/7250 | 2,612,041,969 | PR_kwDODunzps5_zDPS | 7,250 | Basic XML support (mostly copy pasted from text) | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-10-24T16:14:50 | 2024-10-24T16:19:18 | 2024-10-24T16:19:16 | MEMBER | null | enable the viewer for datasets like https://huggingface.co/datasets/FrancophonIA/e-calm (there will be more and more apparently)
| {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7250/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7250",
"html_url": "https://github.com/huggingface/datasets/pull/7250",
"diff_url": "https://github.com/huggingface/datasets/pull/7250.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7250.patch",
"merged_at": "2024-10-24T16:19:16"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7249 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7249/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7249/comments | https://api.github.com/repos/huggingface/datasets/issues/7249/events | https://github.com/huggingface/datasets/issues/7249 | 2,610,136,636 | I_kwDODunzps6bk4Y8 | 7,249 | How to debugging | {
"login": "ShDdu",
"id": 49576595,
"node_id": "MDQ6VXNlcjQ5NTc2NTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/49576595?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShDdu",
"html_url": "https://github.com/ShDdu",
"followers_url": "https://api.github.com/users/ShDdu/followers",
"following_url": "https://api.github.com/users/ShDdu/following{/other_user}",
"gists_url": "https://api.github.com/users/ShDdu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShDdu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShDdu/subscriptions",
"organizations_url": "https://api.github.com/users/ShDdu/orgs",
"repos_url": "https://api.github.com/users/ShDdu/repos",
"events_url": "https://api.github.com/users/ShDdu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShDdu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-10-24T01:03:51 | 2024-10-24T01:03:51 | null | NONE | null | ### Describe the bug
I wanted to use my own script to handle the processing, so I followed the tutorial documentation and wrote the MyDatasetConfig and MyDataset builder classes (the latter containing the _info, _split_generators and _generate_examples methods). Testing with simple data produced the processing results, but when I tried more complex processing I found that I was unable to debug (even the simple samples were inaccessible). No errors are reported, and the _info, _split_generators and _generate_examples messages print, but execution never stops at my breakpoints.
### Steps to reproduce the bug
```python
# my_dataset.py
import json

import datasets


class MyDatasetConfig(datasets.BuilderConfig):
    def __init__(self, **kwargs):
        super(MyDatasetConfig, self).__init__(**kwargs)


class MyDataset(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    BUILDER_CONFIGS = [
        MyDatasetConfig(
            name="default",
            version=VERSION,
            description="myDATASET"
        ),
    ]

    def _info(self):
        print("info")  # breakpoints
        return datasets.DatasetInfo(
            description="myDATASET",
            features=datasets.Features(
                {
                    "id": datasets.Value("int32"),
                    "text": datasets.Value("string"),
                    "label": datasets.ClassLabel(names=["negative", "positive"]),
                }
            ),
            supervised_keys=("text", "label"),
        )

    def _split_generators(self, dl_manager):
        print("generate")  # breakpoints
        data_file = "data.json"
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_file}
            ),
        ]

    def _generate_examples(self, filepath):
        print("example")  # breakpoints
        with open(filepath, encoding="utf-8") as f:
            data = json.load(f)
        for idx, sample in enumerate(data):
            yield idx, {
                "id": sample["id"],
                "text": sample["text"],
                "label": sample["label"],
            }
```

```python
# main.py
import os

os.environ["TRANSFORMERS_NO_MULTIPROCESSING"] = "1"
from datasets import load_dataset

dataset = load_dataset("my_dataset.py", split="train", cache_dir=None)
print(dataset[:5])
```
### Expected behavior
Execution should pause at the breakpoints while debugging.
### Environment info
pycharm
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7249/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7249/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7248 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7248/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7248/comments | https://api.github.com/repos/huggingface/datasets/issues/7248/events | https://github.com/huggingface/datasets/issues/7248 | 2,609,926,089 | I_kwDODunzps6bkE_J | 7,248 | ModuleNotFoundError: No module named 'datasets.tasks' | {
"login": "shoowadoo",
"id": 93593941,
"node_id": "U_kgDOBZQhVQ",
"avatar_url": "https://avatars.githubusercontent.com/u/93593941?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shoowadoo",
"html_url": "https://github.com/shoowadoo",
"followers_url": "https://api.github.com/users/shoowadoo/followers",
"following_url": "https://api.github.com/users/shoowadoo/following{/other_user}",
"gists_url": "https://api.github.com/users/shoowadoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shoowadoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shoowadoo/subscriptions",
"organizations_url": "https://api.github.com/users/shoowadoo/orgs",
"repos_url": "https://api.github.com/users/shoowadoo/repos",
"events_url": "https://api.github.com/users/shoowadoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/shoowadoo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-10-23T21:58:25 | 2024-10-24T17:00:19 | null | NONE | null | ### Describe the bug
```
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-9-13b5f31bd391> in <cell line: 1>()
----> 1 dataset = load_dataset('knowledgator/events_classification_biotech')

11 frames
/usr/local/lib/python3.10/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
   2130
   2131     # Create a dataset builder
-> 2132     builder_instance = load_dataset_builder(
   2133         path=path,
   2134         name=name,

/usr/local/lib/python3.10/dist-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)
   1886         raise ValueError(error_msg)
   1887
-> 1888     builder_cls = get_dataset_builder_class(dataset_module, dataset_name=dataset_name)
   1889     # Instantiate the dataset builder
   1890     builder_instance: DatasetBuilder = builder_cls(

/usr/local/lib/python3.10/dist-packages/datasets/load.py in get_dataset_builder_class(dataset_module, dataset_name)
    246         dataset_module.importable_file_path
    247     ) if dataset_module.importable_file_path else nullcontext():
--> 248         builder_cls = import_main_class(dataset_module.module_path)
    249     if dataset_module.builder_configs_parameters.builder_configs:
    250         dataset_name = dataset_name or dataset_module.builder_kwargs.get("dataset_name")

/usr/local/lib/python3.10/dist-packages/datasets/load.py in import_main_class(module_path)
    167 def import_main_class(module_path) -> Optional[Type[DatasetBuilder]]:
    168     """Import a module at module_path and return its main class: a DatasetBuilder"""
--> 169     module = importlib.import_module(module_path)
    170     # Find the main class in our imported module
    171     module_main_cls = None

/usr/lib/python3.10/importlib/__init__.py in import_module(name, package)
    124             break
    125         level += 1
--> 126     return _bootstrap._gcd_import(name[level:], package, level)
    127
    128

/usr/lib/python3.10/importlib/_bootstrap.py in _gcd_import(name, package, level)
/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load(name, import_)
/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)
/usr/lib/python3.10/importlib/_bootstrap.py in _load_unlocked(spec)
/usr/lib/python3.10/importlib/_bootstrap_external.py in exec_module(self, module)
/usr/lib/python3.10/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)

~/.cache/huggingface/modules/datasets_modules/datasets/knowledgator--events_classification_biotech/9c8086d498c3104de3a3c5b6640837e18ccd829dcaca49f1cdffe3eb5c4a6361/events_classification_biotech.py in <module>
      1 import datasets
      2 from datasets import load_dataset
----> 3 from datasets.tasks import TextClassification
      4
      5 DESCRIPTION = """

ModuleNotFoundError: No module named 'datasets.tasks'
```
### Steps to reproduce the bug
```python
!pip install datasets
from datasets import load_dataset

dataset = load_dataset('knowledgator/events_classification_biotech')
```
### Expected behavior
no ModuleNotFoundError
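For context (a hypothetical workaround, not an official fix): the `datasets.tasks` module was removed in the 3.x releases, so scripts that still do `from datasets.tasks import TextClassification` only import on older versions (e.g. after `pip install "datasets<3.0.0"`). A quick check for whether the module is present:

```python
import importlib.util

# `find_spec` returns None when `datasets.tasks` is absent (as on datasets 3.x);
# the try/except also covers environments where `datasets` itself is missing.
try:
    has_tasks = importlib.util.find_spec("datasets.tasks") is not None
except ModuleNotFoundError:
    has_tasks = False

print("datasets.tasks available:", has_tasks)
```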
### Environment info
google colab | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7248/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7248/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7247 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7247/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7247/comments | https://api.github.com/repos/huggingface/datasets/issues/7247/events | https://github.com/huggingface/datasets/issues/7247 | 2,606,230,029 | I_kwDODunzps6bV-oN | 7,247 | Adding column with dict struction when mapping lead to wrong order | {
"login": "chchch0109",
"id": 114604968,
"node_id": "U_kgDOBtS7qA",
"avatar_url": "https://avatars.githubusercontent.com/u/114604968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chchch0109",
"html_url": "https://github.com/chchch0109",
"followers_url": "https://api.github.com/users/chchch0109/followers",
"following_url": "https://api.github.com/users/chchch0109/following{/other_user}",
"gists_url": "https://api.github.com/users/chchch0109/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chchch0109/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chchch0109/subscriptions",
"organizations_url": "https://api.github.com/users/chchch0109/orgs",
"repos_url": "https://api.github.com/users/chchch0109/repos",
"events_url": "https://api.github.com/users/chchch0109/events{/privacy}",
"received_events_url": "https://api.github.com/users/chchch0109/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-10-22T18:55:11 | 2024-10-22T18:55:23 | null | NONE | null | ### Describe the bug
In the `map()` function, I want to add a new column with a dict structure.
```
def map_fn(example):
example['text'] = {'user': ..., 'assistant': ...}
return example
```
However, this leads to the wrong key order `{'assistant': ..., 'user': ...}` in the dataset.
Thus I can't concatenate two datasets due to the different feature structures.
[Here](https://colab.research.google.com/drive/1zeaWq9Ith4DKWP_EiBNyLfc8S8I68LyY?usp=sharing) is a minimal reproducible example
This seems to be an issue in the low-level pyarrow library rather than in datasets; however, I think datasets should allow concatenating two datasets that are effectively in the same structure.
### Steps to reproduce the bug
[Here](https://colab.research.google.com/drive/1zeaWq9Ith4DKWP_EiBNyLfc8S8I68LyY?usp=sharing) is a minimal reproducible example
### Expected behavior
two datasets could be concatenated.
### Environment info
N/A | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7247/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7247/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7246 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7246/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7246/comments | https://api.github.com/repos/huggingface/datasets/issues/7246/events | https://github.com/huggingface/datasets/pull/7246 | 2,605,734,447 | PR_kwDODunzps5_ehPi | 7,246 | Set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-10-22T15:04:47 | 2024-10-22T15:07:31 | 2024-10-22T15:04:58 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7246/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7246/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7246",
"html_url": "https://github.com/huggingface/datasets/pull/7246",
"diff_url": "https://github.com/huggingface/datasets/pull/7246.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7246.patch",
"merged_at": "2024-10-22T15:04:58"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7245 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7245/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7245/comments | https://api.github.com/repos/huggingface/datasets/issues/7245/events | https://github.com/huggingface/datasets/pull/7245 | 2,605,701,235 | PR_kwDODunzps5_eaiE | 7,245 | Release: 3.0.2 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-10-22T14:53:34 | 2024-10-22T15:01:50 | 2024-10-22T15:01:47 | MEMBER | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7245/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7245/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7245",
"html_url": "https://github.com/huggingface/datasets/pull/7245",
"diff_url": "https://github.com/huggingface/datasets/pull/7245.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7245.patch",
"merged_at": "2024-10-22T15:01:47"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7244 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7244/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7244/comments | https://api.github.com/repos/huggingface/datasets/issues/7244/events | https://github.com/huggingface/datasets/pull/7244 | 2,605,461,515 | PR_kwDODunzps5_dqWP | 7,244 | use huggingface_hub offline mode | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-10-22T13:27:16 | 2024-10-22T14:10:45 | 2024-10-22T14:10:20 | MEMBER | null | and better handling of LocalEntryNotFoundError cc @Wauplin
follow up to #7234 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7244/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7244/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7244",
"html_url": "https://github.com/huggingface/datasets/pull/7244",
"diff_url": "https://github.com/huggingface/datasets/pull/7244.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7244.patch",
"merged_at": "2024-10-22T14:10:20"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7243 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7243/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7243/comments | https://api.github.com/repos/huggingface/datasets/issues/7243/events | https://github.com/huggingface/datasets/issues/7243 | 2,602,853,172 | I_kwDODunzps6bJGM0 | 7,243 | ArrayXD with None as leading dim incompatible with DatasetCardData | {
"login": "alex-hh",
"id": 5719745,
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alex-hh",
"html_url": "https://github.com/alex-hh",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 5 | 2024-10-21T15:08:13 | 2024-10-22T14:18:10 | null | CONTRIBUTOR | null | ### Describe the bug
Creating a dataset with ArrayXD features whose leading dimension is None leads to errors when downloading from the Hub, because DatasetCardData strips the Nones from the feature shapes.
@lhoestq
### Steps to reproduce the bug
```python
import numpy as np
from datasets import Array2D, Dataset, Features, load_dataset
def examples_generator():
for i in range(4):
yield {
"array_1d": np.zeros((10,1), dtype="uint16"),
"array_2d": np.zeros((10, 1), dtype="uint16"),
}
features = Features(array_1d=Array2D((None,1), "uint16"), array_2d=Array2D((None, 1), "uint16"))
dataset = Dataset.from_generator(examples_generator, features=features)
dataset.push_to_hub("alex-hh/test_array_1d2d")
ds = load_dataset("alex-hh/test_array_1d2d")
```
The source of the error appears to be `DatasetCardData.to_dict` invoking `DatasetCardData._remove_none`:
```python
from huggingface_hub import DatasetCardData
from datasets.info import DatasetInfosDict
dataset_card_data = DatasetCardData()
DatasetInfosDict({"default": dataset.info.copy()}).to_dataset_card_data(dataset_card_data)
print(dataset_card_data.to_dict()) # removes Nones in shape
```
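For context, here is a minimal pure-Python sketch of the suspected behavior. It is an assumption based on the method name: `_remove_none` recursively drops `None` values from the card data, which would corrupt a `(None, 1)` shape.

```python
# Hypothetical stand-in for the card data's _remove_none, illustrating
# why a shape of (None, 1) would lose its leading dimension on serialization.
def remove_none(obj):
    if isinstance(obj, dict):
        return {k: remove_none(v) for k, v in obj.items() if v is not None}
    if isinstance(obj, (list, tuple)):
        return type(obj)(remove_none(v) for v in obj if v is not None)
    return obj

feature_meta = {"array_1d": {"shape": [None, 1], "dtype": "uint16"}}
cleaned = remove_none(feature_meta)
print(cleaned)  # the None leading dimension is silently dropped: shape becomes [1]
```

If this is indeed what happens, the round-tripped shape `[1]` no longer matches the stored Arrow data, which would explain the failure on `load_dataset`.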
### Expected behavior
It should be possible to load datasets saved with None as the leading dimension of an ArrayXD shape.
### Environment info
3.0.2 and latest huggingface_hub | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7243/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7241 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7241/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7241/comments | https://api.github.com/repos/huggingface/datasets/issues/7241/events | https://github.com/huggingface/datasets/issues/7241 | 2,599,899,156 | I_kwDODunzps6a91AU | 7,241 | `push_to_hub` overwrite argument | {
"login": "ceferisbarov",
"id": 60838378,
"node_id": "MDQ6VXNlcjYwODM4Mzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/60838378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ceferisbarov",
"html_url": "https://github.com/ceferisbarov",
"followers_url": "https://api.github.com/users/ceferisbarov/followers",
"following_url": "https://api.github.com/users/ceferisbarov/following{/other_user}",
"gists_url": "https://api.github.com/users/ceferisbarov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ceferisbarov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ceferisbarov/subscriptions",
"organizations_url": "https://api.github.com/users/ceferisbarov/orgs",
"repos_url": "https://api.github.com/users/ceferisbarov/repos",
"events_url": "https://api.github.com/users/ceferisbarov/events{/privacy}",
"received_events_url": "https://api.github.com/users/ceferisbarov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 9 | 2024-10-20T03:23:26 | 2024-10-24T17:39:08 | 2024-10-24T17:39:08 | NONE | null | ### Feature request
Add an `overwrite` argument to the `push_to_hub` method.
### Motivation
I want to overwrite a repo without deleting it on Hugging Face. Is this possible? I couldn't find anything in the documentation or tutorials.
### Your contribution
I can create a PR. | {
"login": "ceferisbarov",
"id": 60838378,
"node_id": "MDQ6VXNlcjYwODM4Mzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/60838378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ceferisbarov",
"html_url": "https://github.com/ceferisbarov",
"followers_url": "https://api.github.com/users/ceferisbarov/followers",
"following_url": "https://api.github.com/users/ceferisbarov/following{/other_user}",
"gists_url": "https://api.github.com/users/ceferisbarov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ceferisbarov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ceferisbarov/subscriptions",
"organizations_url": "https://api.github.com/users/ceferisbarov/orgs",
"repos_url": "https://api.github.com/users/ceferisbarov/repos",
"events_url": "https://api.github.com/users/ceferisbarov/events{/privacy}",
"received_events_url": "https://api.github.com/users/ceferisbarov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7241/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7240 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7240/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7240/comments | https://api.github.com/repos/huggingface/datasets/issues/7240/events | https://github.com/huggingface/datasets/pull/7240 | 2,598,980,027 | PR_kwDODunzps5_KxSL | 7,240 | Feature Request: Add functionality to pass split types like train, test in DatasetDict.map | {
"login": "jp1924",
"id": 93233241,
"node_id": "U_kgDOBY6gWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/93233241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jp1924",
"html_url": "https://github.com/jp1924",
"followers_url": "https://api.github.com/users/jp1924/followers",
"following_url": "https://api.github.com/users/jp1924/following{/other_user}",
"gists_url": "https://api.github.com/users/jp1924/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jp1924/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jp1924/subscriptions",
"organizations_url": "https://api.github.com/users/jp1924/orgs",
"repos_url": "https://api.github.com/users/jp1924/repos",
"events_url": "https://api.github.com/users/jp1924/events{/privacy}",
"received_events_url": "https://api.github.com/users/jp1924/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-10-19T09:59:12 | 2024-10-19T09:59:12 | null | NONE | null | Hello datasets!
We often encounter situations where we need to preprocess data differently depending on split types such as train, valid, and test.
However, while `DatasetDict.map` can already pass the rank or index to the mapped function, there is no way to pass the split type.
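At the moment the only option is to iterate over the splits by hand and thread the split name through `fn_kwargs` yourself. A minimal sketch of that workaround (a plain dict of example lists stands in for a `DatasetDict` here; with the real library each value would be a `Dataset` and the call would be `ds.map(preprocess, fn_kwargs={"split": split})`):

```python
# Per-split preprocessing without a with_splits parameter: loop over the
# splits manually and pass the split name into the mapped function.
def preprocess(example, split):
    out = dict(example)
    out["is_train"] = split == "train"  # branch on the split name
    return out

dataset_dict = {
    "train": [{"text": "a"}, {"text": "b"}],
    "test": [{"text": "c"}],
}

processed = {
    split: [preprocess(ex, split=split) for ex in examples]
    for split, examples in dataset_dict.items()
}
print(processed["test"][0])  # {'text': 'c', 'is_train': False}
```

This works, but it gives up the one-call convenience of `DatasetDict.map`, which is what the proposed parameter would restore.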
Therefore, I propose adding a 'with_splits' parameter to DatasetDict, which would allow passing the split type through fn_kwargs. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7240/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7240",
"html_url": "https://github.com/huggingface/datasets/pull/7240",
"diff_url": "https://github.com/huggingface/datasets/pull/7240.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7240.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7238 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7238/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7238/comments | https://api.github.com/repos/huggingface/datasets/issues/7238/events | https://github.com/huggingface/datasets/issues/7238 | 2,598,409,993 | I_kwDODunzps6a4JcJ | 7,238 | incompatibility issue when using load_dataset with datasets==3.0.1 | {
"login": "jupiterMJM",
"id": 74985234,
"node_id": "MDQ6VXNlcjc0OTg1MjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74985234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jupiterMJM",
"html_url": "https://github.com/jupiterMJM",
"followers_url": "https://api.github.com/users/jupiterMJM/followers",
"following_url": "https://api.github.com/users/jupiterMJM/following{/other_user}",
"gists_url": "https://api.github.com/users/jupiterMJM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jupiterMJM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jupiterMJM/subscriptions",
"organizations_url": "https://api.github.com/users/jupiterMJM/orgs",
"repos_url": "https://api.github.com/users/jupiterMJM/repos",
"events_url": "https://api.github.com/users/jupiterMJM/events{/privacy}",
"received_events_url": "https://api.github.com/users/jupiterMJM/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2024-10-18T21:25:23 | 2024-12-09T09:49:32 | null | NONE | null | ### Describe the bug
There is a bug when using load_dataset with datasets version 3.0.1.
Please see the "Steps to reproduce the bug" section below.
To resolve the bug, I had to downgrade to version 2.21.0.
OS: Ubuntu 24 (AWS instance)
Python: same bug under 3.12 and 3.10
The error I had was:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/load.py", line 2096, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/builder.py", line 924, in download_and_prepare
    self._download_and_prepare(
  File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/builder.py", line 1647, in _download_and_prepare
    super()._download_and_prepare(
  File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/builder.py", line 977, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/home/ubuntu/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_6_0/cb17afd34f5799f97e8f48398748f83006335b702bd785f9880797838d541b81/common_voice_6_0.py", line 159, in _split_generators
    archive_path = dl_manager.download(self._get_bundle_url(self.config.name, bundle_url_template))
  File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/download/download_manager.py", line 150, in download
    download_config = self.download_config.copy()
  File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/download/download_config.py", line 73, in copy
    return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
TypeError: DownloadConfig.__init__() got an unexpected keyword argument 'ignore_url_params'
```
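For context, a minimal self-contained sketch of the suspected failure mode (an assumption based on the traceback: `copy()` forwards every stored attribute to `__init__`, so an attribute written under one version of the class breaks construction under a version where the field was removed — the class names below are illustrative stand-ins, not the real library classes):

```python
import copy
from dataclasses import dataclass

@dataclass
class DownloadConfigOld:  # hypothetical older version: has ignore_url_params
    ignore_url_params: bool = False

@dataclass
class DownloadConfigNew:  # hypothetical newer version: the field was removed
    pass

old = DownloadConfigOld(ignore_url_params=True)
try:
    # copy() pattern from the traceback: replay all stored attrs into __init__
    DownloadConfigNew(**{k: copy.deepcopy(v) for k, v in old.__dict__.items()})
except TypeError as e:
    print(e)  # a TypeError naming 'ignore_url_params', as in the report
```

This would explain why the error only appears when a dataset script written against an older `datasets` release is executed by 3.0.1.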
### Steps to reproduce the bug
1. install dataset with ```pip install datasets --upgrade```
2. launch python; from datasets import loaad_dataset
3. run load_dataset("mozilla-foundation/common_voice_6_0")
4. exit python
5. uninstall datasets; then ```pip install datasets==2.21.0```
6. launch python; from datasets import loaad_dataset
7. run load_dataset("mozilla-foundation/common_voice_6_0")
8. Everything runs great now
### Expected behavior
Be able to download a dataset without error
### Environment info
- `datasets` version: 3.0.1
- Platform: Linux-6.8.0-1017-aws-x86_64-with-glibc2.39
- Python version: 3.12.4
- `huggingface_hub` version: 0.26.0
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7238/timeline | null | null | null | null | false |