
A Standardized Benchmarking Framework to Assess Downscaled Precipitation Simulations.

Authors :
Isphording, Rachael N.
Alexander, Lisa V.
Bador, Margot
Green, Donna
Evans, Jason P.
Wales, Scott
Source :
Journal of Climate; Feb 2024, Vol. 37, Issue 4, p1089-1110, 22p
Publication Year :
2024

Abstract

Presently, there is no standardized framework or set of metrics for assessing regional climate model precipitation output. As a result, it can be difficult to make a one-to-one comparison of model performance between regions or studies, or against coarser-resolution global climate models. To address this, we introduce the first steps toward establishing a dynamic, yet standardized, benchmarking framework that can be used to assess model skill in simulating various characteristics of rainfall. Benchmarking differs from typical model evaluation in that performance expectations are set a priori. The framework has numerous applications: it can underpin scientific studies that assess model performance, inform model development priorities, and aid stakeholder decision-making by providing a structured methodology to identify fit-for-purpose model simulations for climate risk assessments and adaptation strategies. While the framework can be applied to regional climate model simulations over any spatial domain, we demonstrate its effectiveness over Australia using high-resolution, 0.5° × 0.5° simulations from the CORDEX-Australasia ensemble. We provide recommendations for selecting metrics and pragmatic benchmarking thresholds depending on the application of the framework, including a top tier of minimum standard metrics that establishes a minimum benchmarking standard for ongoing climate model assessment. We present multiple applications of the framework using feedback received from potential user communities and encourage the scientific and user communities to build on this framework by tailoring benchmarks and incorporating additional metrics specific to their applications.

Significance Statement: We introduce a standardized benchmarking framework for assessing the skill of regional climate models in simulating precipitation. This framework addresses the lack of a uniform approach in the scientific community and has diverse applications in scientific research, model development, and societal decision-making. We define a set of minimum standard metrics, quantifying model skill in simulating fundamental characteristics of rainfall, to underpin ongoing climate model assessments. We provide guidance for selecting metrics and defining benchmarking thresholds, demonstrated using multiple case studies over Australia. The framework has broad applications for numerous user communities and provides a structured methodology for the assessment of model performance.
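The central idea of the framework, as described in the abstract, is that a precipitation metric is compared against a performance threshold fixed before the evaluation. The following Python sketch is not from the paper; the metric choice, function name, threshold value, and synthetic data are illustrative assumptions, meant only to show what an a priori benchmark check for a single metric (mean precipitation bias) might look like.

```python
# Minimal sketch (hypothetical, not the authors' code): compare a single
# precipitation metric against a benchmark threshold chosen a priori.
import numpy as np

def benchmark_mean_bias(model_pr, obs_pr, threshold_pct=20.0):
    """Percentage bias in climatological mean precipitation and whether it
    meets an a priori benchmark.

    model_pr, obs_pr : precipitation arrays (e.g. mm/day) on a common grid
    threshold_pct    : maximum acceptable absolute bias in percent
                       (the 20% value is a hypothetical placeholder)
    """
    bias_pct = 100.0 * (np.nanmean(model_pr) - np.nanmean(obs_pr)) / np.nanmean(obs_pr)
    return bias_pct, abs(bias_pct) <= threshold_pct

# Toy data standing in for one regional simulation and a gridded
# observational reference on a time x lat x lon grid.
rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=1.5, size=(120, 100, 100))
model = obs * 1.1 + rng.normal(0.0, 0.2, size=obs.shape)  # wet-biased model

bias, passed = benchmark_mean_bias(model, obs)
print(f"Mean precipitation bias: {bias:+.1f}% -> "
      f"{'meets' if passed else 'fails'} benchmark")
```

In the framework described by the paper, such pass/fail checks would be repeated across a set of minimum standard metrics and, where needed, application-specific metrics; the sketch above only illustrates the shape of one such check.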

Details

Language :
English
ISSN :
0894-8755
Volume :
37
Issue :
4
Database :
Complementary Index
Journal :
Journal of Climate
Publication Type :
Academic Journal
Accession Number :
175345016
Full Text :
https://doi.org/10.1175/JCLI-D-23-0317.1