Tasks: Time Series Forecasting
Modalities: Image
Formats: imagefolder
Size: < 1K
Tags: time-series, multimodality, pretrained-model, foundation-model, multimodal-time-series-foundation-model
ChengsenWang committed: Update README.md
README.md
CHANGED
@@ -24,7 +24,7 @@ For details on ChatTime models, training data and procedures, and experimental r

![](architecture.png)

-##
+## 💾 Dataset

The data for continuous pre-training is sourced from two extensive open-source time series repositories, [Monash](https://forecastingdata.org/) and [TFB](https://github.com/decisionintelligence/TFB), encompassing approximately 100 sub-datasets. We apply sliding-window slicing to the original time series using five distinct window and step sizes, as illustrated in the following table, prioritizing larger segments. Given the numerous repeating patterns and the limited computational resources, we perform K-means on 10M original time series slices, clustering them into 1M and 25K groups and randomly selecting one sample from each group as its representative. Consequently, we obtain a high-quality dataset for continuous pre-training (1M samples) and instruction fine-tuning (25K samples).
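The slicing-and-clustering procedure described in the README text above can be illustrated with a minimal Python sketch. This is not the authors' released pipeline: the toy data, the single (window, step) pair, and the small cluster count are placeholders, whereas the actual construction uses five window/step configurations, roughly 10M slices, and 1M / 25K clusters.

```python
# Minimal sketch (assumptions, not the released pipeline): slide windows over
# each raw series, cluster the resulting slices with K-means, and keep one
# randomly chosen slice per cluster as its representative.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)

def slice_series(series: np.ndarray, window: int, step: int) -> np.ndarray:
    """Return overlapping slices of length `window`, advancing by `step`."""
    if series.size < window:
        return np.empty((0, window))
    return np.lib.stride_tricks.sliding_window_view(series, window)[::step]

# Toy corpus standing in for the Monash / TFB sub-datasets.
raw_series = [rng.standard_normal(rng.integers(200, 500)) for _ in range(50)]

# 1) Sliding-window slicing (one placeholder window/step configuration).
slices = np.concatenate([slice_series(s, window=96, step=24) for s in raw_series])

# 2) K-means over the slices; MiniBatchKMeans keeps this tractable at scale.
n_clusters = 64  # placeholder for 1M (pre-training) or 25K (fine-tuning)
labels = MiniBatchKMeans(n_clusters=n_clusters, random_state=0).fit_predict(slices)

# 3) Keep one random slice from each non-empty cluster as its representative.
rep_idx = np.array([rng.choice(np.flatnonzero(labels == c)) for c in np.unique(labels)])
subset = slices[rep_idx]
print(subset.shape)  # roughly (n_clusters, 96)
```

MiniBatchKMeans is used here only to keep the sketch fast on large slice counts; the README does not specify which K-means implementation or distance handling the authors used.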