Regularized Estimation in High Dimensional Time Series under Mixing Conditions.

arXiv: Machine Learning (2016)

Abstract
The Lasso is one of the most popular methods in high dimensional statistical learning. Most existing theoretical results for the Lasso, however, require the samples to be i.i.d. Recent work has provided guarantees for the Lasso assuming that the time series is generated by a sparse Vector Auto-Regressive (VAR) model with Gaussian innovations. Proofs of these results rely critically on the fact that the true data generating mechanism (DGM) is a finite-order Gaussian VAR. This assumption is quite brittle: linear transformations, including selecting a subset of variables, can lead to a violation of this assumption. In order to break free from such assumptions, we derive nonasymptotic inequalities for the estimation error and prediction error of the Lasso estimate of the best linear predictor without assuming any special parametric form of the DGM. Instead, we rely only on (strict) stationarity and mixing conditions to establish consistency of the Lasso in the following two scenarios: (a) alpha-mixing Gaussian processes, and (b) beta-mixing sub-Gaussian random vectors. Our work provides an alternative proof of the consistency of the Lasso for sparse Gaussian VAR models, but the applicability of our results extends to non-Gaussian and non-linear time series models, as the examples we provide demonstrate. In order to prove our results, we derive a novel Hanson-Wright type concentration inequality for beta-mixing sub-Gaussian random vectors that may be of independent interest.
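As a rough illustration of the estimation problem the abstract describes, the following is a minimal sketch of fitting a Lasso to lagged observations of a stationary time series in order to estimate the best linear predictor. The nonlinear data generating mechanism, the one-lag design, and the penalty level `lam` are illustrative assumptions (not taken from the paper), and scikit-learn's `Lasso` is used purely as a convenient solver.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Illustrative stationary, non-Gaussian, nonlinear DGM: the paper's guarantees
# require only stationarity and mixing, not a finite-order Gaussian VAR.
T, p = 500, 20
X = np.zeros((T, p))
for t in range(1, T):
    X[t] = 0.5 * np.tanh(X[t - 1]) + rng.laplace(scale=0.5, size=p)

# Lasso estimate of the best linear predictor of the first coordinate of X_t
# from X_{t-1} (lag order and target coordinate are arbitrary choices here).
design, target = X[:-1], X[1:, 0]
lam = np.sqrt(np.log(p) / T)   # illustrative penalty on the sqrt(log p / T) scale
beta_hat = Lasso(alpha=lam, fit_intercept=False).fit(design, target).coef_

print(np.nonzero(beta_hat)[0])  # estimated support of the linear predictor
```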