Update README.md
README.md
CHANGED
@@ -20,7 +20,6 @@ dataset_info:
 dataset_size: 2042850034
 task_categories:
 - summarization
-- question-answering
 language:
 - en
 - de
@@ -28,18 +27,27 @@ language:
 - it
 - es
 size_categories:
-- 1M<n<10M
+- 100K<n<1M
+license: apache-2.0
+tags:
+- chemistry
+- biology
 ---
-# Dataset Card for "sumstew
+# Dataset Card for "sumstew"
+
+## TL;DR:
+
+SumStew is an abstractive, multilingual dataset with a balanced number of samples from a diverse set of summarization datasets. The input sizes range up to 8192 tokens.
+Samples are filtered using a diverse set of heuristics to encourage high coverage, accuracy, and factual consistency. Code to reproduce the dataset is available at *TODO*.
 
 ## Dataset Description
 
 - **Dataset Identifier**: sumstew
-- **Dataset Summary**: "SumStew" is a rich multilingual dataset for text summarization
+- **Dataset Summary**: "SumStew" is a rich multilingual dataset for text summarization. It incorporates diverse data sources such as cnn_dailymail, samsum, mlsum (de, fr, es, it), klexikon, xlsum (fr, en, es), govreport, sciqa, piqa, pumbed_qa, multinews, laysum, booksum, dialogsum, fanpage (it), ilpost (it). This data has been curated by filtering based on n-gram overlap between the source and target documents and normalized to prevent undue bias. Every instance in this dataset is prefixed by an instruction (title, summary, or qa).
 
 ## Task Information
 
-- **Task Categories**: The tasks covered by this dataset are primarily summarization
+- **Task Categories**: The tasks covered by this dataset are primarily summarization tasks.
 - **Languages**: This dataset supports multiple languages including English (en), German (de), French (fr), Italian (it), and Spanish (es).
 
 ## Dataset Structure
@@ -47,19 +55,15 @@ size_categories:
 - **Data Instances**: Each data instance in the dataset comprises five fields - 'prompt', 'target', 'task', 'subset', and 'language'.
 - 'prompt': The input text for the task. (dtype: string)
 - 'target': The expected output for the task. (dtype: string)
-- 'task': The type of task to be performed. (dtype: string)
 - 'subset': The subset of the dataset the instance belongs to. (dtype: string)
 - 'language': The language of the instance. (dtype: string)
 
 - **Data Splits**: The dataset is split into two subsets:
-- 'Train' set:
-- 'Validation' set:
+- 'Train' set: 314114 examples
+- 'Validation' set: 11143 examples
 
 ## Dataset Statistics
 
-- **Dataset Size Categories**: The total dataset size falls in the range of 1M<n<10M.
-- **Download Size**: 3,460,794,602 bytes
-- **Total Dataset Size**: 5,345,907,512 bytes
 - **Max Document Length**: The maximum document length is 8192 mlong-t5 tokens.
 - **Max Output Length**: The maximum output length is 1024 mlong-t5 tokens.
 
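The updated card says the data was filtered on n-gram overlap between source and target documents to encourage coverage and factual consistency, but the reproduction code is still marked *TODO*. The sketch below only illustrates one plausible filter of that kind; the trigram size, whitespace tokenization, and the `min_overlap`/`max_overlap` thresholds are assumptions for illustration, not the authors' actual heuristics.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return a multiset of n-grams for a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_overlap(source: str, target: str, n: int = 3) -> float:
    """Fraction of target n-grams that also occur in the source (simple whitespace tokens)."""
    src, tgt = source.lower().split(), target.lower().split()
    src_ngrams, tgt_ngrams = ngrams(src, n), ngrams(tgt, n)
    if not tgt_ngrams:
        return 0.0
    hits = sum(min(count, src_ngrams[gram]) for gram, count in tgt_ngrams.items())
    return hits / sum(tgt_ngrams.values())

def keep_example(source: str, target: str,
                 min_overlap: float = 0.05, max_overlap: float = 0.9) -> bool:
    """Hypothetical filter: drop summaries sharing too little trigram content with the
    source (likely noisy/unfaithful) or too much (likely near-verbatim extraction)."""
    overlap = ngram_overlap(source, target)
    return min_overlap <= overlap <= max_overlap

if __name__ == "__main__":
    doc = "The committee met on Tuesday and approved the new budget for 2024."
    summary = "The committee approved the 2024 budget."
    print(keep_example(doc, summary))
```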
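The card also describes the column layout ('prompt', 'target', 'task', 'subset', 'language') and the train/validation splits. A minimal loading sketch with the Hugging Face `datasets` library follows; the repository id is a placeholder assumption, not something confirmed by the card.

```python
from datasets import load_dataset

# Placeholder Hub id -- replace with the dataset's actual repository id.
DATASET_ID = "<namespace>/sumstew"

# Load both splits described in the card ('train' and 'validation').
ds = load_dataset(DATASET_ID)

print(ds)                    # split names and example counts
print(ds["train"].features)  # expected string columns: prompt, target, task, subset, language

sample = ds["train"][0]
# Each prompt is prefixed with an instruction (title, summary, or qa),
# so the first tokens show which task the example encodes.
print(sample["language"], sample["subset"])
print(sample["prompt"][:200])
print(sample["target"][:200])
```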
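The length statistics are quoted in mlong-t5 tokens. A small sketch of how such limits can be checked with a `transformers` tokenizer; the `google/long-t5-tglobal-base` checkpoint is a stand-in assumption, since the card does not name the exact mLongT5 tokenizer used for counting.

```python
from transformers import AutoTokenizer

# Stand-in checkpoint; substitute the multilingual LongT5 tokenizer the card refers to if available.
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")

MAX_INPUT_TOKENS = 8192   # maximum document length reported in the card
MAX_OUTPUT_TOKENS = 1024  # maximum summary length reported in the card

def fits(example) -> bool:
    """Check whether a prompt/target pair stays within the card's stated token limits."""
    n_in = len(tokenizer(example["prompt"]).input_ids)
    n_out = len(tokenizer(example["target"]).input_ids)
    return n_in <= MAX_INPUT_TOKENS and n_out <= MAX_OUTPUT_TOKENS

example = {"prompt": "summary: The quick brown fox jumps over the lazy dog.",
           "target": "A fox jumps over a dog."}
print(fits(example))
```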