
VQ-TR: Vector Quantized Attention for Time Series Forecasting

ICLR 2023

Abstract
Modern time series datasets can easily contain hundreds or thousands of temporal points; however, Transformer-based models scale poorly with sequence length, constraining their context size in the sequence-to-sequence setting. In this work, we introduce VQ-TR, which maps large sequences to a discrete set of latent representations as part of the attention module. This allows us to attend over larger context windows with linear complexity in the sequence length. We compare this method against competitive deep learning and classical univariate probabilistic models, and highlight its performance using both probabilistic and point forecasting metrics on a variety of open datasets from different domains.
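The key idea, attending over a small discrete set of latents instead of every time step, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function name `vq_attention`, the nearest-neighbor assignment, and the per-slot value pooling are illustrative assumptions; the point is only that attention cost becomes O(L·K) for a codebook of K entries rather than O(L²).

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def vq_attention(queries, keys, values, codebook):
    """Sketch of attention over quantized latents (illustrative, not the paper's code).

    queries, keys, values: (L, d) arrays; codebook: (K, d) array with K << L.
    """
    # Assign each key to its nearest codebook entry (squared Euclidean distance).
    d2 = ((keys[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (L, K)
    assign = d2.argmin(axis=1)                                     # (L,)
    # Pool values per codebook slot, so attention runs over K slots, not L steps.
    K, dv = codebook.shape[0], values.shape[1]
    pooled = np.zeros((K, dv))
    counts = np.zeros(K)
    for i, k in enumerate(assign):
        pooled[k] += values[i]
        counts[k] += 1
    nonempty = counts > 0
    pooled[nonempty] /= counts[nonempty, None]
    # Cross-attention from queries to the occupied slots: O(L*K) instead of O(L^2).
    scores = queries @ codebook[nonempty].T / np.sqrt(queries.shape[1])
    return softmax(scores) @ pooled[nonempty]
```

With a fixed codebook size K, doubling the sequence length L only doubles the work, which is what allows larger context windows than quadratic self-attention.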
Key words
deep learning, time series forecasting, latent variable models, transformer