Tasks: Summarization
Formats: csv
Languages: English
Size: 10K - 100K
Tags: stacked summaries
License:
README.md CHANGED
@@ -15,9 +15,7 @@ size_categories:
 
 # stacked samsum 1024
 
-Created with the `stacked-booksum` repo version v0.25. It contains
-
-To summarize the requested information in a cohesive numeric list:
+Created with the `stacked-booksum` repo version v0.25. It contains:
 
 1. Original Dataset: copy of the base dataset
 
@@ -25,7 +23,7 @@ To summarize the requested information in a cohesive numeric list:
 - Maximum Input Length: The maximum length for input sequences is 1024 tokens in the longt5 model tokenizer.
 - Maximum Output Length: The maximum length for output sequences is also 1024 tokens in the longt5 model tokenizer.
 
-3. Special Token: The dataset utilizes the `[NEXT_CONCEPT]` token to indicate a new topic **within** the same summary. It is
+3. Special Token: The dataset utilizes the `[NEXT_CONCEPT]` token to indicate a new topic **within** the same summary. It is recommended to explicitly add this special token to your model's tokenizer before training, ensuring that it is recognized and processed correctly during downstream usage.
 
 ## stats
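For readers wiring this card into a preprocessing script, the two length caps above translate directly into tokenizer arguments. Below is a minimal sketch, not part of the card, assuming the `google/long-t5-tglobal-base` checkpoint; the card only says "the longt5 model tokenizer" without naming one, so swap in whichever LongT5 variant you train.

```python
# Minimal sketch: check a (dialogue, summary) pair against the 1024-token caps.
# Assumes google/long-t5-tglobal-base; the card does not name a checkpoint.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")

MAX_LEN = 1024  # cap applied to both inputs and targets, per the card

def fits_caps(dialogue: str, summary: str) -> bool:
    """Return True if both sides tokenize to at most MAX_LEN tokens."""
    n_in = len(tokenizer(dialogue)["input_ids"])
    n_out = len(tokenizer(summary)["input_ids"])
    return n_in <= MAX_LEN and n_out <= MAX_LEN
```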
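The special-token recommendation in item 3 amounts to registering `[NEXT_CONCEPT]` with the tokenizer and growing the model's embedding matrix before training. A short sketch using the standard `transformers` calls; the checkpoint name is again an assumption, not something the card specifies.

```python
# Sketch: register [NEXT_CONCEPT] so it is kept as a single token instead of
# being split into sentencepiece pieces. Checkpoint name is a placeholder.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/long-t5-tglobal-base")

num_added = tokenizer.add_special_tokens(
    {"additional_special_tokens": ["[NEXT_CONCEPT]"]}
)
if num_added:
    # Give the new token id a trainable row in the embedding matrix.
    model.resize_token_embeddings(len(tokenizer))
```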