ChengsenWang committed · Commit ee73185 · verified · 1 Parent(s): 423b3ad

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -24,7 +24,7 @@ For details on ChatTime models, training data and procedures, and experimental r
 
 ![](architecture.png)
 
-## 📝 Dataset
+## 💾 Dataset
 
 The data for continuous pre-training is sourced from two extensive open-source time series repositories, [Monash](https://forecastingdata.org/) and [TFB](https://github.com/decisionintelligence/TFB), encompassing approximately 100 sub-datasets. We apply sliding-window slicing to the original time series using five distinct window and step sizes, as illustrated in the following table, and prioritize slicing the original series into larger segments. Given the numerous repeating patterns and the limited computational resources, we perform K-means on the 10M resulting slices, grouping them into 1M and 25K clusters and randomly selecting one sample from each cluster as its representative. Consequently, we obtain a high-quality dataset for continuous pre-training (1M) and instruction fine-tuning (25K).
 
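The diff context above summarizes ChatTime's data pipeline: slide fixed-length windows over each raw series, cluster the resulting slices with K-means, and keep one random member per cluster to deduplicate repeating patterns. Below is a minimal Python sketch of that slice-cluster-sample idea; the `(window, step)` values, cluster count, and function names are illustrative assumptions, not the commit's actual code.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def sliding_slices(series: np.ndarray, window: int, step: int) -> np.ndarray:
    """Cut one 1-D series into overlapping fixed-length slices."""
    starts = range(0, len(series) - window + 1, step)
    return np.stack([series[s:s + window] for s in starts])

def select_representatives(slices: np.ndarray, n_clusters: int, seed: int = 0) -> np.ndarray:
    """K-means the slices, then keep one random member per cluster so that
    repeating patterns collapse to a single representative."""
    km = MiniBatchKMeans(n_clusters=n_clusters, random_state=seed)
    labels = km.fit_predict(slices)
    rng = np.random.default_rng(seed)
    reps = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        if members.size:  # MiniBatchKMeans can leave a cluster empty
            reps.append(slices[rng.choice(members)])
    return np.stack(reps)

# Toy run with one hypothetical (window, step) pair; the README applies five
# such pairs to ~10M slices and clusters them into 1M / 25K groups.
series = np.sin(np.linspace(0.0, 100.0, 5_000))
slices = sliding_slices(series, window=96, step=48)
print(select_representatives(slices, n_clusters=16).shape)  # e.g. (16, 96)
```

Sampling one member per cluster, rather than keeping cluster centroids, preserves real series rather than averaged ones, which matches the README's wording of selecting a sample "to serve as a representative".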