Using cross‐validation methods to select time series models: Promises and pitfalls.
- Source :
- British Journal of Mathematical & Statistical Psychology. May 2024, Vol. 77, Issue 2, p337-355. 19p.
- Publication Year :
- 2024
Abstract
- Vector autoregressive (VAR) modelling is widely employed in psychology for time series analyses of dynamic processes. However, the typically short time series in psychological studies can lead to overfitting of VAR models, impairing their predictive ability on unseen samples. Cross‐validation (CV) methods are commonly recommended for assessing the predictive ability of statistical models. However, it is unclear how the performance of CV is affected by characteristics of time series data and the fitted models. In this simulation study, we examine the ability of two CV methods, namely, 10‐fold CV and blocked CV, to estimate the prediction errors of three time series models with increasing complexity (person‐mean, AR, and VAR), and evaluate how their performance is affected by data characteristics. We then compare these CV methods to the traditional methods using the Akaike (AIC) and Bayesian (BIC) information criteria in their accuracy of selecting the most predictive models. We find that CV methods tend to underestimate prediction errors of simpler models, but overestimate prediction errors of VAR models, particularly when the number of observations is small. Nonetheless, CV methods, especially blocked CV, generally outperform the AIC and BIC. We conclude our study with a discussion on the implications of the findings and provide helpful guidelines for practice. [ABSTRACT FROM AUTHOR]
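- To make the two CV schemes compared in the abstract concrete, the sketch below (not the authors' code) contrasts 10‐fold CV, which assigns lag/outcome pairs to folds at random, with blocked CV, which uses contiguous segments of the series as folds. The AR(1) model, series length, coefficient, and fold count are illustrative assumptions only.

```python
# Minimal sketch: 10-fold CV vs. blocked CV for an AR(1) model fitted by OLS.
# All settings (n, AR coefficient, k) are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(1)

# Simulate a short AR(1) series: y_t = 0.4 * y_{t-1} + noise
n = 100
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.4 * y[t - 1] + rng.normal()

# Lagged-predictor / outcome pairs (y_{t-1}, y_t)
X, Y = y[:-1], y[1:]
idx = np.arange(len(Y))

def ar1_cv_mse(fold_ids, k):
    """Mean squared prediction error of an AR(1) fit, averaged over k folds."""
    errors = []
    for fold in range(k):
        test = fold_ids == fold
        train = ~test
        b, a = np.polyfit(X[train], Y[train], deg=1)  # OLS slope and intercept
        pred = a + b * X[test]
        errors.append(np.mean((Y[test] - pred) ** 2))
    return float(np.mean(errors))

k = 10
random_folds = rng.permutation(idx) % k        # 10-fold CV: random assignment of pairs
blocked_folds = idx * k // len(idx)            # blocked CV: contiguous time segments

print("10-fold CV MSE:", ar1_cv_mse(random_folds, k))
print("blocked CV MSE:", ar1_cv_mse(blocked_folds, k))
```

- The only difference between the two schemes is how fold membership is assigned; blocked CV keeps temporally adjacent observations in the same fold, which is why it is often preferred for dependent data such as the time series studied here.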
Details
- Language :
- English
- ISSN :
- 0007-1102
- Volume :
- 77
- Issue :
- 2
- Database :
- Academic Search Index
- Journal :
- British Journal of Mathematical & Statistical Psychology
- Publication Type :
- Academic Journal
- Accession number :
- 176535698
- Full Text :
- https://doi.org/10.1111/bmsp.12330