Explainable Artificial Intelligence for Bayesian Neural Networks: Toward Trustworthy Predictions of Ocean Dynamics
- Author
- Clare, Mariana C. A., Sonnewald, Maike, Lguensat, Redouane, Deshayes, Julie, and Balaji, V.
- Subjects
- BAYESIAN analysis, OCEAN dynamics, ARTIFICIAL intelligence, TRUST, FORECASTING
- Abstract
The trustworthiness of neural networks is often challenged because they lack the ability to express uncertainty and explain their skill. This can be problematic given the increasing use of neural networks in high-stakes decision-making, such as in climate change applications. We address both issues by successfully implementing a Bayesian Neural Network (BNN), whose parameters are distributions rather than deterministic values, and by applying novel implementations of explainable AI (XAI) techniques. The uncertainty analysis from the BNN provides a comprehensive overview of the prediction better suited to practitioners' needs than predictions from a classical neural network. Using a BNN means we can calculate the entropy (i.e., uncertainty) of the predictions and determine whether the probability of an outcome is statistically significant. To enhance trustworthiness, we also spatially apply two XAI techniques: Layer‐wise Relevance Propagation (LRP) and SHapley Additive exPlanation (SHAP) values. These XAI methods reveal the extent to which the BNN is suitable and/or trustworthy. Using two techniques gives a more holistic view of BNN skill and its uncertainty, as LRP considers neural network parameters, whereas SHAP considers changes to outputs. We verify these techniques by comparison with intuition from physical theory. The differences in explanation identify potential areas where new physical-theory-guided studies are needed.

Plain Language Summary: Understanding ocean dynamics and how they are affected by global heating is crucial for understanding climate change impacts. Neural networks are ideally suited to this problem, but they neither explain how they make predictions nor express how certain they are of those predictions' accuracy, which considerably limits their trustworthiness for ocean science problems.
Here, we address both issues by using a "Bayesian Neural Network" (BNN), which directly expresses prediction uncertainty, and by applying explainable AI (XAI) techniques to explain how the BNN arrives at its predictions. The BNN provides a comprehensive overview better suited to addressing the core problem than that provided by classical neural networks. We also apply two XAI techniques (SHAP and LRP) to the BNN and evaluate their trustworthiness by comparing the similarities and differences between their explanations and intuition from physical theory. Any differences offer an opportunity to develop physical theory guided by what the BNN considers important.

Key Points:
- Novel use of a Bayesian Neural Network (BNN) to quantify uncertainty in ocean dynamical regime classifications, giving a holistic prediction
- Explaining the skill of a BNN using two techniques originating from two different classes of explainable AI: SHapley Additive exPlanation (SHAP) and Layer‐wise Relevance Propagation (LRP)
- Trustworthiness is evaluated by comparing similarities and differences between SHAP and LRP explanations with intuition from physical theory

[ABSTRACT FROM AUTHOR]
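The predictive entropy the abstract refers to can be estimated from repeated stochastic forward passes through a BNN. A minimal sketch, assuming softmax outputs over a set of dynamical regime classes; the function name, the choice of six regimes, and the Dirichlet toy data are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def predictive_entropy(prob_samples):
    """Entropy of the mean predictive distribution from BNN Monte Carlo samples.

    prob_samples: array of shape (n_samples, n_classes) holding softmax
    outputs from repeated stochastic forward passes through the BNN.
    High entropy = uncertain prediction; entropy near 0 = confident one.
    """
    mean_probs = prob_samples.mean(axis=0)        # average over forward passes
    eps = 1e-12                                    # guard against log(0)
    return float(-np.sum(mean_probs * np.log(mean_probs + eps)))

# Toy example: 100 stochastic draws over 6 hypothetical dynamical regimes,
# sampled from a Dirichlet concentrated on the first regime.
rng = np.random.default_rng(0)
samples = rng.dirichlet(np.array([20.0, 2.0, 1.0, 1.0, 1.0, 1.0]), size=100)
h = predictive_entropy(samples)
```

The entropy is bounded above by `log(n_classes)` (a uniform, maximally uncertain prediction), which gives a natural scale for flagging grid points where the classification should not be trusted.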
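The SHAP values mentioned above are Shapley values of a feature-attribution game: each feature's contribution is its average marginal effect on the model output over all feature coalitions. A brute-force sketch for a tiny stand-in model; the linear model, weights, and baseline are hypothetical, and real SHAP implementations approximate this exponential sum efficiently:

```python
import itertools
import math
import numpy as np

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for the prediction f(x) relative to a baseline.

    Features outside a coalition S are replaced by their baseline values,
    mirroring the masking convention used by SHAP-style explainers.
    """
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in itertools.combinations(others, r):
                z = baseline.copy()
                z[list(S)] = x[list(S)]            # coalition S takes real values
                without_i = f(z)
                z[i] = x[i]                        # add feature i to the coalition
                with_i = f(z)
                weight = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                phi[i] += weight * (with_i - without_i)
    return phi

# Hypothetical linear stand-in for a network output.
w = np.array([2.0, -1.0, 0.5])
f = lambda z: float(w @ z)
x = np.array([1.0, 2.0, 3.0])
base = np.zeros(3)
phi = shapley_values(f, x, base)
# For a linear model, phi_i reduces to w_i * (x_i - baseline_i),
# and the attributions sum to f(x) - f(baseline) (local accuracy).
```

The sum-to-prediction property is what makes SHAP maps comparable across grid points, in contrast to LRP, which propagates relevance backward through the network's own parameters.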
- Published
- 2022