# corpus-ckb
## Metadata

```yaml
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 3967568183
      num_examples: 2131752
  download_size: 1773193447
  dataset_size: 3967568183
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
language:
  - ckb
task_categories:
  - text-classification
  - table-question-answering
  - translation
  - text-generation
  - text2text-generation
size_categories:
  - 1M<n<10M
```

## Dataset Overview

corpus-ckb is a large-scale text dataset composed of Central Kurdish (ckb) text. It is intended for natural language processing (NLP) tasks such as text classification, language modeling, and machine translation, and is particularly useful for researchers and developers working with Central Kurdish language data.

## Dataset Details

### Dataset Info

- **Features**: The dataset contains a single feature:
  - `text`: a string holding a Central Kurdish text snippet.
- **Splits**: The dataset ships a single `train` split of 2,131,752 examples, amounting to 3,967,568,183 bytes.
- **Download size**: 1,773,193,447 bytes.
- **Dataset size**: 3,967,568,183 bytes.
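The split statistics above imply roughly 1–2 KB of text per example. A quick sanity check in plain Python, with the figures copied from the metadata:

```python
# Split statistics copied from the dataset card metadata.
num_bytes = 3_967_568_183
num_examples = 2_131_752
download_size = 1_773_193_447

avg_bytes = num_bytes / num_examples   # average UTF-8 bytes per example
ratio = download_size / num_bytes      # downloaded files vs. expanded dataset

print(round(avg_bytes))   # ~1861 bytes per example
print(round(ratio, 2))    # ~0.45, i.e. the download is roughly 2.2x smaller
```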

### Configurations

- **Config name**: `default`
- **Data files**: the `train` split is stored across multiple files matched by the glob pattern `data/train-*`.

### Language

- **Language**: the dataset contains text in Central Kurdish (`ckb`).

## Usage

This dataset is suitable for a variety of NLP applications, including but not limited to:

- **Text classification**: training models to classify texts into predefined categories.
- **Language modeling**: developing models that understand or generate Central Kurdish text.
- **Machine translation**: building models that translate between Central Kurdish and other languages.
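For any of the tasks above, a typical starting point is the Hugging Face `datasets` library. The repository id `PawanOsman/corpus-ckb` below is an assumption based on this card's location; adjust it if the dataset lives elsewhere. Streaming mode avoids downloading the full ~1.8 GB up front:

```python
from itertools import islice

from datasets import load_dataset

# Repository id assumed from the card's location; adjust if it differs.
ds = load_dataset("PawanOsman/corpus-ckb", split="train", streaming=True)

# Peek at the first few examples without materializing the whole corpus.
for example in islice(ds, 3):
    print(example["text"][:120])
```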

## Limitations and Considerations

- **Data quality**: evaluate the dataset's quality against your specific use case; large-scale text collections of this kind typically contain noise, duplicates, and inconsistencies.
- **Ethical use**: be mindful of the ethical implications of using this dataset, especially concerning the representation and handling of cultural and linguistic nuances.
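As a hypothetical starting point for the quality concerns above, a minimal pre-filter might drop very short snippets and exact duplicates. The `clean_corpus` helper and its `min_chars` threshold are illustrative sketches, not tuned for this corpus:

```python
def clean_corpus(texts, min_chars=30):
    """Drop very short snippets and exact duplicates.

    min_chars is an illustrative threshold, not tuned for this corpus.
    """
    seen = set()
    kept = []
    for text in texts:
        t = text.strip()
        if len(t) < min_chars or t in seen:
            continue
        seen.add(t)
        kept.append(t)
    return kept


sample = [
    "ok",             # too short: dropped
    "a" * 40,         # kept
    "a" * 40,         # exact duplicate: dropped
    "b" * 40 + "  ",  # kept after whitespace stripping
]
print(clean_corpus(sample))  # two snippets survive
```

Real cleaning pipelines usually go further (near-duplicate detection, language identification, boilerplate removal), but even a filter this simple can catch gross noise before training.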