
🎨 Danbooru2024 Dataset


πŸ“Š Dataset Overview

The Danbooru2024 dataset is a comprehensive collection focused on animation and illustration artwork, derived from the official Danbooru platform. It contains approximately 6.5 million high-quality, user-annotated images with corresponding tags and textual descriptions.

This dataset is filtered from an original set of 8.3 million entries, excluding NSFW-rated and opted-out entries to create a more accessible and audience-friendly resource. It addresses the challenges associated with repeatedly over-crawled booru databases by providing a curated and well-structured alternative.

✨ Features

πŸ“‹ Metadata Support

Metadata is provided as a single Parquet file (metadata.parquet).

Example code for usage:

# install necessary packages; a Parquet engine (pyarrow or fastparquet) is required
#%pip install pandas pyarrow

import pandas as pd

# read the metadata Parquet file
df = pd.read_parquet('metadata.parquet')
print(df.head())  # check the first 5 rows
#print(df.columns)  # list all available columns

necessary_columns = [
    "created_at", "score", "rating", "tag_string", "up_score",
    "down_score", "fav_count"
]
df = df[necessary_columns]  # keep only the columns we need
df['created_at'] = pd.to_datetime(df['created_at'])  # convert to datetime

datetime_start = pd.Timestamp('2007-01-01', tz='UTC')
datetime_end = pd.Timestamp('2008-01-01', tz='UTC')
subdf = df[(df['created_at'] >= datetime_start) & 
           (df['created_at'] < datetime_end)]

# count entries per rating
print(subdf['rating'].value_counts())
# save the filtered subset
subdf.to_parquet('metadata-2007.parquet')
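As a further sketch of working with tag_string (Danbooru stores a post's tags as one space-separated string), the snippet below counts tag frequencies. The tiny in-memory frame is a hypothetical stand-in for illustration; with the real file, replace it with pd.read_parquet('metadata.parquet').

```python
from collections import Counter

import pandas as pd

# hypothetical stand-in rows for illustration only;
# with the real file: df = pd.read_parquet('metadata.parquet')
df = pd.DataFrame({
    "tag_string": [
        "1girl solo long_hair",
        "1girl 1boy outdoors",
        "solo 1girl smile",
    ],
    "rating": ["g", "s", "g"],
})

# tags are space-separated; split each string and count every tag
tag_counts = Counter()
for tags in df["tag_string"]:
    tag_counts.update(tags.split())

print(tag_counts.most_common(3))  # the most frequent tags
```

On the full 6.5M-row metadata this gives a quick view of the overall tag distribution before any filtering.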

πŸ“₯ Partial Downloads

To simplify downloading specific entries, use the CheeseChaser library:

#%pip install cheesechaser  # >=0.2.0
from cheesechaser.datapool import Danbooru2024SfwDataPool
from cheesechaser.query import DanbooruIdQuery

pool = Danbooru2024SfwDataPool()
#my_waifu_ids = DanbooruIdQuery(['surtr_(arknights)', 'solo'])
# the query above only works when Danbooru itself is reachable;
# otherwise, derive the ids from the local metadata instead:
import pandas as pd

# read only the necessary columns from the metadata Parquet file
df = pd.read_parquet('metadata.parquet',
                     columns=['id', 'tag_string'])

# the parentheses in surtr_(arknights) would be interpreted as a regex group,
# so disable regex matching and do a plain substring search
subdf = df[df['tag_string'].str.contains('surtr_(arknights)', regex=False) &
           df['tag_string'].str.contains('solo', regex=False)]
ids = subdf['id'].tolist()  # take ids from the id column, not the positional index
print(ids[:5])  # check the first 5 ids

# download danbooru images with surtr+solo, to directory /data/exp2_surtr
pool.batch_download_to_directory(
    resource_ids=ids,
    dst_dir='/data/exp2_surtr',
    max_workers=12,
)
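Before downloading, the id list can be narrowed further, for example by score. The sketch below uses a hypothetical in-memory frame in place of metadata.parquet; with the real file, read it with columns=['id', 'tag_string', 'score'] instead.

```python
import pandas as pd

# hypothetical stand-in rows for illustration only; with the real file:
# df = pd.read_parquet('metadata.parquet', columns=['id', 'tag_string', 'score'])
df = pd.DataFrame({
    "id": [101, 102, 103, 104],
    "tag_string": [
        "surtr_(arknights) solo",
        "surtr_(arknights) 1boy",
        "surtr_(arknights) solo",
        "texas_(arknights) solo",
    ],
    "score": [50, 10, 3, 80],
})

mask = (
    df["tag_string"].str.contains("surtr_(arknights)", regex=False)
    & df["tag_string"].str.contains("solo", regex=False)
    & (df["score"] >= 20)  # keep only well-scored posts
)
ids = df.loc[mask, "id"].tolist()
print(ids)  # -> [101]
```

The resulting ids list can then be passed as resource_ids to pool.batch_download_to_directory exactly as above, which keeps the download volume manageable.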

Terms and Conditions for Dataset Use

General Use Requirements

  • User Responsibility:
    Users must possess sufficient knowledge and expertise to use the dataset appropriately. Any derived works or outputs created using the dataset are the sole responsibility of the user. The creators of the dataset do not offer any guarantees or warranties regarding the outcomes or uses of such derived works.

License Agreement

  • Mandatory Agreement:
    Usage of this dataset is contingent upon the user’s acceptance of the associated LICENSE terms. Without agreement, users are prohibited from utilizing the dataset.

  • Modifications and Updates:
    The dataset may be subject to updates or changes over time. These modifications are governed by the same LICENSE terms and conditions.

  • Opt-Out Compliance:
    The dataset aligns with the opt-out policy of the original booru database. If applicable, any modifications to this policy will be reflected and respected in the dataset.

Prohibited Uses

The dataset explicitly prohibits the following activities:

  1. Harmful or Malicious Activities:

    • Using the dataset or its outputs to harass, threaten, or intimidate individuals or groups.
    • Spreading false or misleading information.
    • Any use intended to cause harm to individuals, organizations, or society.
  2. Illegal Activities:

    • Generating content or outputs that violate local, national, or international laws.
    • Any use that breaches applicable regulations or promotes unlawful actions.
  3. Unethical or Offensive Content Modification:

    • Modifying the dataset to produce controversial materials that violate ethical guidelines or community standards.
    • Any use that could incite hate, violence, or discrimination.

User Agreement and Acknowledgment

By using this dataset, users explicitly agree to:

  • Adhere to the conditions specified in the LICENSE.
  • Take full responsibility for how the dataset and its outputs are utilized, including any consequences resulting from their use.

Disclaimer

  • No Warranties:
    The creators of the dataset provide it "as is" and make no warranties regarding the dataset's quality, reliability, or fitness for any particular purpose.

  • Indemnification:
    Users agree to indemnify and hold harmless the creators against any claims, damages, or liabilities arising from their use of the dataset.

🏷️ Dataset Information

  • License: Other
  • Task Categories:
    • Image Classification
    • Zero-shot Image Classification
    • Text-to-Image
  • Languages:
    • English
    • Japanese
  • Tags:
    • Art
    • Anime
  • Size Category: 1M < n < 10M
  • Annotation Creators: No annotation
  • Source Datasets: Danbooru

Note: This dataset is provided for research and development purposes. Please ensure compliance with all applicable usage terms and conditions.
