Timer: Transformers for Time Series Analysis at Scale
CoRR (2024)
Abstract
Deep learning has contributed remarkably to the advancement of time series
analysis. Still, deep models can encounter performance bottlenecks in
real-world small-sample scenarios, bottlenecks that can be concealed by the
performance saturation of small models on current benchmarks. Meanwhile,
large models have demonstrated great power in such scenarios through
large-scale pre-training. Continuous progress has followed the emergence of
large language models, which exhibit unprecedented few-shot generalization,
scalability, and task generality; these abilities are, however, absent in
time series models. To change the current practice of training small models
on specific datasets from scratch, this paper aims at the early development
of large time series models (LTSMs). During pre-training, we curate
large-scale datasets with up to 1 billion time points, unify heterogeneous
time series into a single-series sequence (S3) format, and develop a
GPT-style architecture toward LTSMs. To meet diverse application needs, we
convert time series forecasting, imputation, and anomaly detection into a
unified generative task. The outcome of this study is a Time Series
Transformer (Timer), which is pre-trained by autoregressive next-token
prediction on large multi-domain datasets and fine-tuned for downstream
scenarios, showing promising abilities as an LTSM.
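The abstract does not spell out the single-series sequence (S3) conversion, but the stated idea of unifying heterogeneous time series can be sketched as flattening a multivariate series into per-variable univariate sequences, normalizing each, and slicing fixed-length training windows. The following is a minimal sketch of that reading; the function name `to_s3`, the window length, and the normalization details are assumptions for illustration, not the authors' procedure.

```python
import numpy as np

def to_s3(multivariate_series, window=672):
    """Hypothetical single-series sequence (S3) conversion: split a
    (time, variables) array into per-variable univariate series,
    normalize each series, and slice fixed-length training windows."""
    windows = []
    for v in range(multivariate_series.shape[1]):
        series = multivariate_series[:, v].astype(np.float64)
        # Per-series normalization so heterogeneous sources share one scale.
        series = (series - series.mean()) / (series.std() + 1e-8)
        # Non-overlapping fixed-length windows as pre-training samples.
        for start in range(0, len(series) - window + 1, window):
            windows.append(series[start:start + window])
    return np.stack(windows)  # shape: (num_windows, window)

# Usage: a toy 2-variable series of 2000 steps yields 4 windows of 672 points.
demo = np.random.randn(2000, 2)
print(to_s3(demo).shape)  # (4, 672)
```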
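Likewise, the GPT-style pre-training objective (autoregressive next-token prediction over time-series tokens) can be sketched in PyTorch. This is a minimal sketch under the assumption that a token is a patch of consecutive time points; the class name `TimerSketch`, the patch length, and all hyperparameters are hypothetical, not the released Timer implementation.

```python
import torch
import torch.nn as nn

class TimerSketch(nn.Module):
    """Minimal decoder-only Transformer over time-series patches (hypothetical)."""
    def __init__(self, patch_len=96, max_tokens=64, d_model=256, n_heads=8, n_layers=4):
        super().__init__()
        self.embed = nn.Linear(patch_len, d_model)                     # patch -> token embedding
        self.pos = nn.Parameter(torch.zeros(1, max_tokens, d_model))   # learned positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, patch_len)                      # token -> next-patch values

    def forward(self, patches):                                        # patches: (B, T, patch_len)
        T = patches.size(1)
        h = self.embed(patches) + self.pos[:, :T]
        causal = nn.Transformer.generate_square_subsequent_mask(T)     # forbid attending ahead
        h = self.backbone(h, mask=causal)
        return self.head(h)                                            # slot t predicts patch t+1

# Next-token prediction: targets are the inputs shifted by one patch.
model = TimerSketch()
x = torch.randn(8, 16, 96)       # toy batch: 8 S3 windows of 16 patches x 96 points
pred = model(x[:, :-1])          # predict patches 2..16 from patches 1..15
loss = nn.functional.mse_loss(pred, x[:, 1:])
loss.backward()
```

The causal mask is what makes the unified generative framing work: forecasting, imputation, and anomaly detection can all be served by generating plausible continuations (or reconstructions) token by token from observed context.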