---
dataset_info:
features:
- name: prompt
dtype: string
- name: target
dtype: string
- name: task
dtype: string
- name: subset
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 5017100271
num_examples: 1278891
- name: validation
num_bytes: 328807241
num_examples: 78903
download_size: 3460794602
dataset_size: 5345907512
task_categories:
- summarization
- question-answering
language:
- en
- de
- fr
- it
- es
size_categories:
- 1M<n<10M
---
# Dataset Card for "sumstew-16k"
## Dataset Description
- **Dataset Identifier**: sumstew
- **Dataset Summary**: "SumStew" is a rich multilingual dataset for text summarization and question answering. It incorporates diverse data sources such as cnn_dailymail, samsum, mlsum (de, fr, es, it), klexikon, xlsum (fr, en, es), govreport, sciqa, piqa, pubmed_qa, multinews, laysum, booksum, dialogsum, fanpage (it), and ilpost (it). The data has been curated by filtering on n-gram overlap between the source and target documents, and normalized to prevent undue bias. Every instance is prefixed by an instruction (title, summary, or qa).
## Task Information
- **Task Categories**: This dataset primarily covers summarization and question-answering tasks.
- **Languages**: The dataset covers English (en), German (de), French (fr), Italian (it), and Spanish (es).
## Dataset Structure
- **Data Instances**: Each data instance in the dataset comprises five fields - 'prompt', 'target', 'task', 'subset', and 'language'.
- 'prompt': The input text for the task. (dtype: string)
- 'target': The expected output for the task. (dtype: string)
- 'task': The type of task to be performed. (dtype: string)
- 'subset': The subset of the dataset the instance belongs to. (dtype: string)
- 'language': The language of the instance. (dtype: string)
- **Data Splits**: The dataset is split into two subsets:
- 'Train' set: 1,278,891 examples (5,017,100,271 bytes)
- 'Validation' set: 78,903 examples (328,807,241 bytes)
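The five-field record layout above can be sketched in plain Python. The field values below are illustrative placeholders, not actual rows from the dataset:

```python
# A hypothetical instance matching the schema described in this card.
record = {
    "prompt": "summary: The quick brown fox jumps over the lazy dog ...",
    "target": "A fox jumps over a dog.",
    "task": "summary",            # instruction type: title, summary, or qa
    "subset": "cnn_dailymail",    # source corpus the instance came from
    "language": "en",             # instance language code
}

EXPECTED_FIELDS = {"prompt", "target", "task", "subset", "language"}

def is_valid(instance: dict) -> bool:
    """Check that an instance exposes exactly the five string fields."""
    return (set(instance) == EXPECTED_FIELDS
            and all(isinstance(v, str) for v in instance.values()))
```

A schema check like `is_valid` can be handy when streaming the splits, since every row should carry all five string fields.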
## Dataset Statistics
- **Dataset Size Categories**: The number of examples falls in the range 1M &lt; n &lt; 10M.
- **Download Size**: 3,460,794,602 bytes
- **Total Dataset Size**: 5,345,907,512 bytes
- **Max Document Length**: The maximum document length is 8192 mLongT5 tokens.
- **Max Output Length**: The maximum output length is 1024 mLongT5 tokens.
## Additional Information
- **Data Collection**: The data has been collected from a variety of sources spanning different languages and domains, ensuring a diverse and comprehensive dataset.
- **Data Cleaning**: The dataset has been filtered by computing the n-gram overlap between each source and target document and dropping samples with too much or too little overlap; the data was also normalized.
- **Known Limitations**: As the dataset is generated from diverse sources, the inherent biases or limitations of those sources may persist in this dataset as well.
- **Usage Scenarios**: This dataset can be used for training and evaluating models on tasks like summarization and question-answering, in a multilingual context.
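The n-gram overlap filter mentioned under Data Cleaning can be sketched as follows. The n-gram size and the thresholds here are illustrative assumptions, not the values actually used to build SumStew:

```python
def ngrams(text: str, n: int = 3) -> set:
    """Lowercased word n-grams of a text."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_overlap(source: str, target: str, n: int = 3) -> float:
    """Fraction of target n-grams that also appear in the source."""
    tgt = ngrams(target, n)
    if not tgt:
        return 0.0
    return len(tgt & ngrams(source, n)) / len(tgt)

def keep(source: str, target: str, low: float = 0.1, high: float = 0.9) -> bool:
    # Drop samples whose target copies too much from the source
    # (near-extractive) or too little (possibly unrelated).
    # The 0.1/0.9 thresholds are hypothetical.
    return low <= ngram_overlap(source, target) <= high
```

A target identical to its source scores an overlap of 1.0 and is dropped, while a target sharing no trigrams with its source scores 0.0 and is likewise dropped.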