Dynamic updating of clinical survival prediction models in a changing environment
- Source: Diagnostic and Prognostic Research, Vol 7, Iss 1, Pp 1-14 (2023)
- Publication Year: 2023
- Publisher: BMC, 2023
Abstract
- Background: Over time, the performance of clinical prediction models may deteriorate due to changes in clinical management, data quality, disease risk and/or patient mix. Such prediction models must be updated in order to remain useful. In this study, we investigate dynamic model updating of clinical survival prediction models. In contrast to discrete or one-time updating, dynamic updating refers to a repeated process for updating a prediction model with new data. We aim to extend previous research, which focused largely on binary outcome prediction models, by concentrating on time-to-event outcomes. We were motivated by the rapidly changing environment seen during the COVID-19 pandemic, where mortality rates changed over time and new treatments and vaccines were introduced.
- Methods: We illustrate three methods for dynamic model updating: Bayesian dynamic updating, recalibration, and full refitting. We use a simulation study to compare performance in a range of scenarios including changing mortality rates, predictors with low prevalence and the introduction of a new treatment. Next, the updating strategies were applied to a model for predicting 70-day COVID-19-related mortality using patient data from QResearch, an electronic health records database from general practices in the UK.
- Results: In simulated scenarios with mortality rates changing over time, all updating methods resulted in better calibration than not updating. Moreover, dynamic updating outperformed ad hoc updating. In the simulation scenario with a new predictor and a small updating dataset, Bayesian updating improved the C-index over not updating and refitting. In the motivating example with a rare outcome, no single updating method offered the best performance.
- Conclusions: We found that a dynamic updating process outperformed one-time discrete updating in the simulations. Bayesian updating offered good performance overall, even in scenarios with new predictors and few events. Intercept recalibration was effective in scenarios with smaller sample size and changing baseline hazard. Refitting performance depended on sample size and produced abrupt changes in hazard ratio estimates between periods.
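As a rough illustration of the contrast between not updating and periodic full refitting described in the abstract, the minimal sketch below simulates a changing environment in which predictor effects drift across updating periods and refits a Cox model in each period. It uses Python with the lifelines package and entirely synthetic data; the predictor names, coefficients and evaluation scheme are illustrative assumptions, not the authors' simulation design or code.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2023)

def simulate_period(n, beta_age, beta_comorb):
    """One period of synthetic time-to-event data with two predictors."""
    age = rng.normal(0, 1, n)
    comorb = rng.normal(0, 1, n)
    rate = 0.1 * np.exp(beta_age * age + beta_comorb * comorb)
    event_time = rng.exponential(1 / rate)
    censor_time = rng.uniform(0, 20, n)
    return pd.DataFrame({
        "age": age,
        "comorbidity": comorb,
        "time": np.minimum(event_time, censor_time),
        "event": (event_time <= censor_time).astype(int),
    })

# Predictor effects drift across periods: a changing environment.
coefs = [(0.8, 0.1), (0.6, 0.3), (0.3, 0.6), (0.1, 0.8)]
periods = [simulate_period(1000, *c) for c in coefs]

# "No updating": fit once on the first period and freeze the model.
static = CoxPHFitter().fit(periods[0].iloc[:500],
                           duration_col="time", event_col="event")

for k in range(1, len(periods)):
    update, test = periods[k].iloc[:500], periods[k].iloc[500:]
    # "Full refitting": re-estimate all coefficients on the newest data.
    refit = CoxPHFitter().fit(update, duration_col="time", event_col="event")
    c_static = static.score(test, scoring_method="concordance_index")
    c_refit = refit.score(test, scoring_method="concordance_index")
    print(f"period {k}: C-index static={c_static:.3f}  refit={c_refit:.3f}")
```

Under these assumed settings, the frozen model's out-of-sample C-index tends to degrade as the coefficients drift while the refit model tracks the new data; the paper's Bayesian updating and intercept recalibration strategies sit between these two extremes and are not sketched here.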
Details
- Language: English
- ISSN: 2397-7523
- Volume: 7
- Issue: 1
- Database: Directory of Open Access Journals
- Journal: Diagnostic and Prognostic Research
- Publication Type: Academic Journal
- Accession number: edsdoj.50c5e01888bc4ade89fcc7b33e7598a9
- Document Type: article
- Full Text: https://doi.org/10.1186/s41512-023-00163-z