
Improving the Reliability of ML‐Corrected Climate Models With Novelty Detection.

Authors :
Sanford, Clayton
Kwa, Anna
Watt‐Meyer, Oliver
Clark, Spencer K.
Brenowitz, Noah
McGibbon, Jeremy
Bretherton, Christopher
Source :
Journal of Advances in Modeling Earth Systems; Nov 2023, Vol. 15 Issue 11, p1-14, 14p
Publication Year :
2023

Abstract

Using machine learning (ML) for the online correction of coarse‐resolution atmospheric models has proven effective in reducing biases in near‐surface temperature and precipitation rate. However, ML corrections often introduce new biases in the upper atmosphere and cause inconsistent model performance across different random seeds. Furthermore, they produce profiles that are outside the distribution of samples used in training, which can interfere with the baseline physics of the atmospheric model and reduce model reliability. This study introduces the use of a novelty detector to mask ML corrections when the atmospheric state is deemed out‐of‐sample. The novelty detector is trained on profiles of temperature and specific humidity in a semi‐supervised fashion using samples from the coarsened reference fine‐resolution simulation. The novelty detector responds to particularly biased simulations relative to the reference simulation by categorizing more columns as out‐of‐sample. Without novelty detection, corrective ML occasionally causes undesirably large climate biases. When coupled to a running year‐long coarse‐grid simulation, novelty detection deems about 21% of columns to be novelties. This identification reduces the spread in the root‐mean‐square error (RMSE) of time‐mean spatial patterns of surface temperature and precipitation rate across a random seed ensemble. In particular, the random seed with the worst RMSE is improved by up to 60% (depending on the variable), while the best seed maintains its low RMSE. By reducing the variance in quality of ML‐corrected climate models, novelty detection offers reliability without compromising prediction quality in atmospheric models.

Plain Language Summary: Fine‐grid global storm‐resolving models produce more accurate rainfall and temperature forecasts than coarse‐grid climate models, but are too computationally expensive to run for many years. Corrective machine learning (ML) can help coarse‐grid climate models act more like fine‐grid models, but it also makes them more vulnerable to inputs lying outside the range of training data for the ML algorithm. For such "out‐of‐sample" inputs, the ML may give unreliable results. Using a separate ML scheme, we identify out‐of‐sample data and disable the ML correction for these cases. We find that this robustly improves the time‐mean temperature and precipitation patterns predicted by ML‐corrected climate simulations to be 30%–50% better than similar simulations without ML. Incorporating novelty detectors into ML‐corrected simulations can improve their prediction skill by helping them avoid drifting into "out‐of‐sample" states.

Key Points:
- A novelty detector is trained to classify whether an atmospheric profile belongs to a high‐resolution model's distribution of profiles
- The detector classifies more profiles as novelties in machine‐learning corrected simulations that drift further from a reference simulation
- Using the detector to turn off corrections for novel profiles leads to corrected simulations with more consistent and less biased climates

[ABSTRACT FROM AUTHOR]
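The masking idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation (the paper trains a semi‐supervised detector on temperature and specific‐humidity profiles from a coarsened fine‐resolution simulation); here a simple Mahalanobis‐distance detector with a percentile cutoff stands in for the trained novelty detector, and `fit_novelty_detector`, `apply_masked_correction`, and the 99th‐percentile threshold are all illustrative choices, not quantities from the study.

```python
import numpy as np

def mahalanobis(x, mu, cov_inv):
    # Mahalanobis distance of each row of x from the training mean mu
    diff = x - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

def fit_novelty_detector(train_profiles, pct=99.0):
    # Fit mean/covariance on in-sample columns (e.g. stacked T and q profiles)
    mu = train_profiles.mean(axis=0)
    cov = np.cov(train_profiles, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    # Threshold set so that ~(100 - pct)% of training columns count as novel
    cutoff = np.percentile(mahalanobis(train_profiles, mu, cov_inv), pct)
    return mu, cov_inv, cutoff

def apply_masked_correction(state, ml_correction, detector):
    # Zero out the ML tendency for columns flagged as out-of-sample,
    # letting the baseline model physics act alone there
    mu, cov_inv, cutoff = detector
    novel = mahalanobis(state, mu, cov_inv) > cutoff
    corrected = state + np.where(novel[:, None], 0.0, ml_correction)
    return corrected, novel
```

In a coupled run, the detector would be applied each time step to every grid column before adding the ML tendencies, so that columns drifting out of the training distribution fall back to the uncorrected physics.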

Details

Language :
English
ISSN :
19422466
Volume :
15
Issue :
11
Database :
Complementary Index
Journal :
Journal of Advances in Modeling Earth Systems
Publication Type :
Academic Journal
Accession number :
173892845
Full Text :
https://doi.org/10.1029/2023MS003809