Commit 50c856f (verified) by xiyuanz · 1 parent: 4066054

Update README.md

Files changed (1): README.md (+114, -1)

README.md CHANGED
@@ -8,4 +8,117 @@ tags:
- foundation models
- time series foundation models
- time-series
---

# UniMTS: Unified Pre-training for Motion Time Series

🚀 This is the official implementation of the NeurIPS 2024 paper "UniMTS: Unified Pre-training for Motion Time Series".

<p align="center">
<img src="./unimts.png" alt="UniMTS overview" width="1000" />
</p>

UniMTS is the first unified pre-training procedure for motion time series that generalizes across diverse device latent factors (positions and orientations) and activities. Specifically, we employ a contrastive learning framework that aligns motion time series with text descriptions enriched by large language models. This helps the model learn the semantics of time series and generalize across activities. Given the absence of large-scale motion time series data, we derive and synthesize time series from existing motion skeleton data with all-joint coverage. Spatio-temporal graph networks capture the relationships across joints, enabling generalization across different device locations. We further design rotation-invariant augmentation to make the model agnostic to changes in device mounting orientations. UniMTS shows exceptional generalizability across 18 motion time series classification benchmark datasets, outperforming the best baselines by 340% in the zero-shot setting, 16.3% in the few-shot setting, and 9.2% in the full-shot setting.

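At its core, the pre-training objective aligns each motion time series with its paired text description. The sketch below is a minimal, CLIP-style illustration of that idea, not the code in this repository; the function name, temperature value, and encoder outputs are assumptions.

```python
# Conceptual sketch of contrastive alignment between time series and text
# embeddings (not the repository's implementation).
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(ts_emb: torch.Tensor,
                               text_emb: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """ts_emb, text_emb: (batch, dim) embeddings of paired time series and descriptions."""
    ts_emb = F.normalize(ts_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = ts_emb @ text_emb.t() / temperature              # (batch, batch) similarities
    targets = torch.arange(ts_emb.size(0), device=ts_emb.device)
    # Symmetric cross-entropy: each series should match its own description and vice versa.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```
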
🤗 We have released the model weights on Hugging Face: https://huggingface.co/xiyuanz/UniMTS

GitHub repo: https://github.com/xiyuanzh/UniMTS

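To pull the weights programmatically, a minimal sketch is below. It assumes the checkpoint file on the Hub is named `UniMTS.pth`, matching the `--checkpoint` path used in the evaluation commands further down; the model definition itself lives in the GitHub repo.

```python
# Sketch: download the released checkpoint and peek at it.
# The filename is an assumption; loading it into the model requires the repo code.
import torch
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(repo_id="xiyuanz/UniMTS", filename="UniMTS.pth")
state = torch.load(ckpt_path, map_location="cpu")
if isinstance(state, dict):
    print(list(state.keys())[:5])  # first few parameter names
```
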
### Evaluation

#### Evaluate on 18 Benchmark Datasets

All the evaluation data are publicly available as specified in the paper. Download the processed evaluation data from [Google Drive](https://drive.google.com/file/d/1ybD5Fx6c4ykJiDGLPQlLn0m77z9EkjLb/view?usp=sharing).

We prepare the fine-tuning and test real data as npy files of shape (number_of_samples, sequence_length, channel_dimension), and their label descriptions as a JSON file. For example, the Opportunity dataset has four activities ("stand", "walk", "sit", "lie"), and the corresponding JSON file is as follows:

```json
{
    "label_dictionary": {
        "0": ["stand"],
        "1": ["walk"],
        "2": ["sit"],
        "3": ["lie"]
    }
}
```

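As a quick sanity check (a sketch, not part of the repo), you can load one of the downloaded datasets and confirm the shapes and label dictionary; the TNDA-HAR paths below are the same ones used in the custom-evaluation command later in this README.

```python
# Sketch: inspect one downloaded benchmark dataset.
import json
import numpy as np

X = np.load("UniMTS_data/TNDA-HAR/X_test.npy")  # (num_samples, seq_len, channels)
y = np.load("UniMTS_data/TNDA-HAR/y_test.npy")  # per-sample labels
with open("UniMTS_data/TNDA-HAR/TNDA-HAR.json") as f:
    cfg = json.load(f)

print(X.shape, y.shape)
print(cfg["label_dictionary"])
```
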
Run the script `evaluate.py` for evaluation.

```sh
python evaluate.py --batch_size 64
```

Or directly run the bash file:

```sh
bash run_evaluation.sh
```

#### Prepare Custom Dataset for Evaluation

* Prepare time series as npy files of shape (number_of_samples, sequence_length, channel_dimension). For channel_dimension, follow the order (acc_x, acc_y, acc_z, gyro_x, gyro_y, gyro_z).
* Prepare a JSON file with the label descriptions, as shown above.
* Normalize the time series measurements ($m/s^2$ for acceleration). A minimal preparation sketch follows this list.

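The sketch below writes placeholder files in the expected layout; every path and value here is hypothetical and should be replaced with your own recordings and labels.

```python
# Sketch of preparing a custom dataset; "custom/" paths and random arrays are placeholders.
import json
import os

import numpy as np

os.makedirs("custom", exist_ok=True)

num_samples, seq_len = 100, 500
# Per device location, channel order is (acc_x, acc_y, acc_z, gyro_x, gyro_y, gyro_z),
# with acceleration already in m/s^2.
X = np.random.randn(num_samples, seq_len, 6).astype(np.float32)
y = np.random.randint(0, 4, size=num_samples)

np.save("custom/X_test.npy", X)
np.save("custom/y_test.npy", y)
with open("custom/custom.json", "w") as f:
    json.dump({"label_dictionary": {"0": ["stand"], "1": ["walk"], "2": ["sit"], "3": ["lie"]}}, f, indent=4)
```
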
Run `run_evaluation_custom.sh`, as in the following example:

```sh
python evaluate_custom.py \
    --batch_size 64 \
    --checkpoint './checkpoint/UniMTS.pth' \
    --X_path 'UniMTS_data/TNDA-HAR/X_test.npy' \
    --y_path 'UniMTS_data/TNDA-HAR/y_test.npy' \
    --config_path 'UniMTS_data/TNDA-HAR/TNDA-HAR.json' \
    --joint_list 20 2 21 3 11 \
    --original_sampling_rate 50
```

* `--original_sampling_rate` specifies the original sampling rate of the time series (note: we only use the first 10 seconds during evaluation; padding is automatically applied if a sequence is shorter than 10 seconds). A sketch of this windowing appears after the figure below.
* `--joint_list` specifies the order of joints for the `channel_dimension`. The joint locations are numbered based on the following figure.

<p align="center">
<img src="./joint_assignment.png" alt="Joint Assignment" width="300" />
</p>

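To make the `--original_sampling_rate` behavior concrete, here is a small illustration (not the repository's code) of truncating to the first 10 seconds and padding shorter sequences; zero padding is an assumption, since the README does not specify the padding value.

```python
# Illustration of the 10-second window implied by --original_sampling_rate.
# Zero padding is an assumption, not confirmed by the repo.
import numpy as np

def window_10_seconds(x: np.ndarray, original_sampling_rate: int) -> np.ndarray:
    """x: (seq_len, channels) -> (10 * original_sampling_rate, channels)."""
    target_len = 10 * original_sampling_rate
    x = x[:target_len]                           # keep only the first 10 seconds
    if x.shape[0] < target_len:                  # pad sequences shorter than 10 seconds
        pad = np.zeros((target_len - x.shape[0], x.shape[1]), dtype=x.dtype)
        x = np.concatenate([x, pad], axis=0)
    return x
```
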
### Fine-tune

Fine-tune the model with `args.k` samples for each class (k = 1, 2, 3, 5, 10 for few-shot fine-tuning), or with all the available samples (full-shot fine-tuning). `args.mode` selects the fine-tuning mode: `full` (fine-tune both the graph encoder and the linear classifier), `probe` (linear probe, i.e., fine-tune only the linear classifier), or `random` (train from a randomly initialized model).

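For intuition, selecting `k` samples per class for few-shot fine-tuning boils down to subsampling the training labels; the helper below is only an illustration, not the logic inside `finetune.py`.

```python
# Illustration of building a k-per-class subset (not the code used by finetune.py).
import numpy as np

def sample_k_per_class(y: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    """Return indices selecting at most k training examples from each class."""
    rng = np.random.default_rng(seed)
    picked = []
    for c in np.unique(y):
        cls_idx = np.flatnonzero(y == c)
        picked.append(rng.choice(cls_idx, size=min(k, len(cls_idx)), replace=False))
    return np.concatenate(picked)
```
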
```sh
for k in 1 2 3 5 10
do
    python finetune.py --mode full --k $k --batch_size 64 --num_epochs 200
done

python finetune.py --mode full --batch_size 64 --num_epochs 200
```

Or directly run the bash file:

```sh
bash run_finetune.sh
```

### Pre-training

To prepare the pre-training datasets:

1. Download the motion skeletons and paired textual descriptions from [HumanML3D](https://github.com/EricGuo5513/HumanML3D).
2. Convert the motion skeletons into BVH files by running `pos2bvh.py` under the root directory of [inverse kinematics](https://github.com/sigal-raab/Motion).
3. Derive and synthesize motion time series from the BVH files by running `bvh2ts.py` under the root directory of [IMUSim](https://github.com/martinling/imusim); a rough conceptual sketch of this derivation is given after this list.
4. Run `python text_aug.py` to further enrich the HumanML3D text descriptions with large language models.

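As a rough intuition only (the actual pipeline goes through inverse kinematics and IMUSim, which also model orientation, gravity, and sensor noise), accelerometer-like signals can be thought of as the second time derivative of a joint's trajectory:

```python
# Rough conceptual sketch: finite-difference acceleration from a joint trajectory.
# This is NOT the IMUSim-based derivation used by the repo.
import numpy as np

def acceleration_from_positions(pos: np.ndarray, fps: float) -> np.ndarray:
    """pos: (num_frames, 3) joint positions in meters -> (num_frames - 2, 3) acceleration in m/s^2."""
    dt = 1.0 / fps
    vel = np.diff(pos, axis=0) / dt   # first difference: velocity
    acc = np.diff(vel, axis=0) / dt   # second difference: acceleration
    return acc
```
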
Run the script `pretrain.py` for pre-training. `args.aug` controls whether rotation-invariant augmentation is applied during pre-training (set to 1 to enable it, 0 to disable it); a conceptual sketch of this augmentation is shown below.

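Conceptually, rotation-invariant augmentation applies a random 3D rotation to each device's accelerometer and gyroscope triplets so the model becomes agnostic to mounting orientation; the snippet below illustrates the idea and is not the repository's exact implementation.

```python
# Conceptual sketch of rotation-invariant augmentation (not the repo's exact code).
import numpy as np
from scipy.spatial.transform import Rotation

def random_rotation_augment(x, rng=None):
    """x: (seq_len, 6) with channels (acc_x, acc_y, acc_z, gyro_x, gyro_y, gyro_z)."""
    R = Rotation.random(random_state=rng).as_matrix()  # random 3x3 rotation matrix
    out = x.copy()
    out[:, 0:3] = x[:, 0:3] @ R.T                      # rotate the acceleration vectors
    out[:, 3:6] = x[:, 3:6] @ R.T                      # rotate the angular-velocity vectors
    return out
```
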
```sh
python pretrain.py --aug 1 --batch_size 64
```

Or directly run the bash file:

```sh
bash run_pretrain.sh
```

### Citation

If you find our work helpful, please cite the following paper: