---
license: apache-2.0
task_categories:
- time-series-forecasting
tags:
- time-series
- multimodality
- pretrained-model
- foundation-model
- multimodal-time-series-foundation-model
size_categories:
- 100K<n<1M
---

# ChatTime: A Multimodal Time Series Foundation Model

## ✨ Introduction

In this paper, we model time series as a foreign language and construct ChatTime, a unified framework for processing time series and text. As an out-of-the-box multimodal time series foundation model, ChatTime provides zero-shot forecasting and supports bimodal input/output for both time series and text. We design a series of experiments to verify its superior performance across multiple tasks and scenarios, and create four multimodal datasets to address data gaps. The experimental results demonstrate the potential and utility of ChatTime.

As depicted in Figure 1(b), during the continuous pre-training stage, we pre-train [LLaMA-2-7B-Base](https://huggingface.co/meta-llama/Llama-2-7b-hf) on [ChengsenWang/ChatTime-1-Pretrain-1M](https://huggingface.co/datasets/ChengsenWang/ChatTime-1-Pretrain-1M), yielding [ChengsenWang/ChatTime-1-7B-Base](https://huggingface.co/ChengsenWang/ChatTime-1-7B-Base).

For details on ChatTime models, training data and procedures, and experimental results, please refer to the [arXiv](https://arxiv.org/abs/2412.11376).

![](architecture.png)
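
The pre-training corpus and the resulting checkpoint are hosted on the Hugging Face Hub. The snippet below is a minimal sketch of how one might load them with the `datasets` and `transformers` libraries; the split name, the record fields, and the assumption that the checkpoint follows the standard LLaMA-2 causal-LM layout are ours rather than an official API, so please check the linked repositories for the exact usage.

```python
# Minimal sketch: loading the pre-training corpus and the continuously
# pre-trained checkpoint from the Hugging Face Hub. The split name and the
# causal-LM assumption are illustrative; see the linked repos for exact usage.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# ~1M sliced time series used for continuous pre-training
corpus = load_dataset("ChengsenWang/ChatTime-1-Pretrain-1M", split="train")
print(corpus[0])  # inspect one record to see the actual field names

# Base model obtained by continuously pre-training LLaMA-2-7B-Base
tokenizer = AutoTokenizer.from_pretrained("ChengsenWang/ChatTime-1-7B-Base")
model = AutoModelForCausalLM.from_pretrained("ChengsenWang/ChatTime-1-7B-Base")
```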

## 💾 Dataset

The data for continuous pre-training is sourced from two extensive open-source time series repositories, [Monash](https://forecastingdata.org/) and [TFB](https://github.com/decisionintelligence/TFB), encompassing approximately 100 sub-datasets. We slice the original time series with sliding windows using five distinct window and step sizes, as listed in the following table, prioritizing larger segments. Because the slices contain many repeating patterns and computational resources are limited, we run K-means on the resulting 10M slices, grouping them into 1M and 25K clusters and randomly selecting one sample from each cluster as its representative. This yields a high-quality dataset for continuous pre-training (1M) and instruction fine-tuning (25K); a sketch of the slicing and clustering procedure is shown after the table.

| Window Size | History Length | Prediction Length | Sliding Step |
| :---------: | :------------: | :---------------: | :----------: |
|     576     |      512       |        64         |      32      |
|     288     |      256       |        32         |      16      |
|     144     |      128       |        16         |      8       |
|     72      |       64       |         8         |      4       |
|     36      |       32       |         4         |      2       |
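
Below is a minimal sketch of the slicing-and-deduplication procedure described above, using NumPy and scikit-learn. The function names, the "largest window that fits" rule, the z-normalization, and the tiny cluster counts in the demo are illustrative assumptions; the paper itself clusters roughly 10M slices into 1M (pre-training) and 25K (fine-tuning) groups.

```python
# Illustrative sketch of window slicing + K-means representative selection.
# Window/step pairs come from the table above; everything else is an assumption.
import numpy as np
from sklearn.cluster import KMeans

WINDOW_CONFIGS = [(576, 32), (288, 16), (144, 8), (72, 4), (36, 2)]  # (window, step)

def slide(series: np.ndarray, window: int, step: int) -> list[np.ndarray]:
    """Cut one series into overlapping windows of a fixed size."""
    return [series[i:i + window] for i in range(0, len(series) - window + 1, step)]

def slice_all(series_list: list[np.ndarray]) -> list[np.ndarray]:
    """Prefer larger segments: slice each series with the biggest window that fits."""
    slices = []
    for s in series_list:
        for window, step in WINDOW_CONFIGS:  # configs ordered large -> small
            if len(s) >= window:
                slices.extend(slide(s, window, step))
                break
    return slices

def select_representatives(slices: list[np.ndarray], n_clusters: int, seed: int = 0) -> list[np.ndarray]:
    """Cluster z-normalized slices and keep one random member per cluster."""
    # Truncate to the shortest length so all slices fit in one feature matrix
    # (a simplification; clustering could also be done per window size).
    length = min(len(x) for x in slices)
    X = np.stack([(x[:length] - x[:length].mean()) / (x[:length].std() + 1e-8) for x in slices])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    rng = np.random.default_rng(seed)
    return [slices[rng.choice(np.flatnonzero(labels == c))] for c in range(n_clusters)]

# Tiny demo with synthetic data (real scale: ~10M slices, 1M / 25K clusters).
rng = np.random.default_rng(0)
demo_series = [rng.standard_normal(rng.integers(40, 700)) for _ in range(50)]
reps = select_representatives(slice_all(demo_series), n_clusters=10)
print(len(reps), "representative slices")
```

Clustering after slicing removes near-duplicate patterns, so the retained representatives cover the corpus more evenly than random subsampling would.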

For details on the pre-training dataset, please refer to the [arXiv](https://arxiv.org/abs/2412.11376).

## 📝 Citation

If you find this repo or our work useful for your research, please consider citing the paper:

```tex
@inproceedings{wang2025chattime,
  author    = {Chengsen Wang and Qi Qi and Jingyu Wang and Haifeng Sun and Zirui Zhuang and Jinming Wu and Lei Zhang and Jianxin Liao},
  title     = {ChatTime: A Unified Multimodal Time Series Foundation Model Bridging Numerical and Textual Data},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
}
```

## 📪 Contact

If you have any questions, please contact [[email protected]]().