Sparse Identification and Estimation of Large-Scale Vector AutoRegressive Moving Averages
- Author
- Sumanta Basu, Ines Wilms, David S. Matteson, and Jacob Bien
- Subjects
Statistics - Methodology (stat.ME), multivariate time series, VARMA, sparse estimation, identifiability, forecasting
- Abstract
The Vector AutoRegressive Moving Average (VARMA) model is fundamental to the theory of multivariate time series; however, identifiability issues have led practitioners to abandon it in favor of the simpler but more restrictive Vector AutoRegressive (VAR) model. We narrow this gap with a new optimization-based approach to VARMA identification built upon the principle of parsimony. Among all equivalent data-generating models, we use convex optimization to seek the parameterization that is "simplest" in a certain sense. A user-specified strongly convex penalty is used to measure model simplicity, and that same penalty is then used to define an estimator that can be efficiently computed. We establish consistency of our estimators in a double-asymptotic regime. Our non-asymptotic error bound analysis accommodates both model specification and parameter estimation steps, a feature that is crucial for studying large-scale VARMA algorithms. Our analysis also provides new results on penalized estimation of infinite-order VAR, and elastic net regression under a singular covariance structure of regressors, which may be of independent interest. We illustrate the advantage of our method over VAR alternatives on three real data examples.
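The abstract's estimation strategy can be illustrated with a hedged two-stage sketch: fit a long sparse VAR to proxy the unobserved innovations, then regress the series on its own lags and on lags of the estimated residuals under an elastic-net penalty. The function names, lag orders, and penalty values below are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative two-stage sketch of sparse VARMA estimation.
# Stage 1 approximates the MA innovations with a long sparse VAR;
# stage 2 runs a penalized regression on lags of the series and of
# the residual proxies. All tuning choices here are placeholders.
import numpy as np
from sklearn.linear_model import MultiTaskLasso, MultiTaskElasticNet

def lag_matrix(Y, p):
    """Stack [y_{t-1}, ..., y_{t-p}] row-wise for t = p, ..., T-1."""
    T, k = Y.shape
    return np.hstack([Y[p - j:T - j] for j in range(1, p + 1)])

def sparse_varma(Y, p_long=8, p=2, q=1, alpha1=0.1, alpha2=0.1):
    T, k = Y.shape
    # Stage 1: long sparse VAR(p_long) approximating the VARMA process.
    X1, Y1 = lag_matrix(Y, p_long), Y[p_long:]
    var_fit = MultiTaskLasso(alpha=alpha1, fit_intercept=False).fit(X1, Y1)
    # Residuals serve as proxies for the unobserved innovations
    # (zero-padded so they align with the original time index).
    E = np.vstack([np.zeros((p_long, k)), Y1 - var_fit.predict(X1)])
    # Stage 2: elastic-net regression on p lags of Y and q lags of E.
    m = max(p, q)
    X_ar = np.hstack([Y[m - j:T - j] for j in range(1, p + 1)])
    X_ma = np.hstack([E[m - j:T - j] for j in range(1, q + 1)])
    fit = MultiTaskElasticNet(alpha=alpha2, l1_ratio=0.7,
                              fit_intercept=False).fit(
        np.hstack([X_ar, X_ma]), Y[m:])
    # Split the coefficient matrix into AR and MA blocks.
    return fit.coef_[:, :k * p], fit.coef_[:, k * p:]
```

The strongly convex elastic-net penalty in stage 2 echoes the abstract's use of a single user-specified penalty for both identification and estimation, though the paper's optimization-based identification step is more refined than this plug-in sketch.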
- Published
- 2023