Modelling seismicity as a spatio-temporal point process using inlabru
- Author
- Serafini, Francesco, Naylor, Mark, Lindgren, Finn, Main, Ian, and Butler, Ian
- Subjects
earthquake prediction, Probabilistic Seismic Hazard Analysis, PSHA, Operational Earthquake Forecasting, OEF, stress accumulation, statistical models, ETAS, Epidemic-Type Aftershock Sequence, GPS measurements, Integrated Nested Laplace Approximation, Bayesian inference, Parimutuel Gambling scoring, probability
- Abstract
Reliable deterministic prediction of earthquake occurrence is not possible at present, and may never be. In the absence of a reliable deterministic model, we need alternative strategies to manage seismic hazard and risk. This involves making statements about the likelihood of earthquake occurrence in space and time, including a fair and accurate description of the uncertainty around statements used in operational decision-making. Probabilistic Seismic Hazard Analysis (PSHA) and Operational Earthquake Forecasting (OEF) provide probabilistic statements on the hazard associated with earthquakes on long-term (decades to centuries) and short-term (days to decades) time frames respectively. Both PSHA and OEF rely on a source model able to describe the occurrence of earthquakes. PSHA source models commonly use a spatially variable Poisson process to describe earthquake occurrence; they are therefore calibrated on declustered catalogues, which retain only the largest earthquakes in a sequence. OEF models, on the other hand, are commonly time-dependent models which describe the occurrence of all events above a certain magnitude threshold, including dependent events such as aftershocks or swarms. They are calibrated on the full earthquake catalogue and provide accurate descriptions of the clustering process and the time evolution of earthquake sequences. The Epidemic-Type Aftershock Sequence (ETAS) model is the most commonly used time-dependent seismicity model and belongs to the general class of Hawkes (or self-exciting) processes. Under the ETAS model, any earthquake in the sequence can induce (or trigger) its own subsequence of earthquakes in a cascade of events, as commonly observed in nature. The earthquake catalogue is then the union of a set of events occurring independently of each other (background events) and a set of events induced or triggered by another event (aftershocks). The reliability of PSHA or OEF strategies depends upon the reliability of the source model used to describe earthquake occurrence. To improve the source model, we need the ability to (a) incorporate hypotheses on earthquake occurrence in a model, and (b) validate the model against observed data. Both tasks are problematic. Indeed, the complex mathematical form of the ETAS model requires ad hoc methodologies to perform inference on the model parameters, and these methodologies need further modification whenever the classical ETAS model is adjusted to introduce new hypotheses. Comparing forecasts produced by models which incorporate different hypotheses and are calibrated with different methods is problematic, because it is difficult (if not impossible) to determine where the differences between the forecasts come from. A unified framework capable of supporting ETAS models incorporating different hypotheses would therefore be beneficial. Similarly, validation has to be carried out on models calibrated on the same data and producing forecasts for the same spatio-temporal region. Moreover, validation must ultimately be done against future data, unknown at the moment the forecasts are produced, to ensure that no information about the data used to validate the models is incorporated in the models themselves. Hence, the Collaboratory for the Study of Earthquake Predictability (CSEP) was founded to gather forecasting models and run fully prospective forecasting experiments in an open environment.
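For reference, a standard parameterisation of the spatio-temporal ETAS conditional intensity (the notation below is conventional and is added here as a sketch for clarity, not taken from the thesis itself) is

$$
\lambda(t, s \mid \mathcal{H}_t) = \mu(s) + \sum_{i\,:\,t_i < t} K\, e^{\alpha (m_i - M_0)}\, g(t - t_i)\, f(s - s_i),
$$

where $\mu(s)$ is the background rate, $K e^{\alpha (m_i - M_0)}$ gives the expected number of direct aftershocks of an event of magnitude $m_i$ above the threshold $M_0$, and $g$ and $f$ are normalised temporal (Omori-Utsu type) and spatial triggering kernels. Each term of the sum lets a past event excite future ones, producing the cascades described above.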
CSEP ensures that the models are validated fairly, using a set of community-agreed metrics which measure the agreement between forecasts and observed outcomes. In this thesis, I present and apply a new Bayesian approximation technique for Hawkes process models (including ETAS). I also demonstrate the importance of one of the statistical properties that scores used to rank competing forecasts need to have in order to provide trustworthy results. The Bayesian framework allows an accurate description of the uncertainty around model parameters, which can then be propagated to any quantity of interest. In Bayesian statistics, the most commonly used inference techniques are Markov Chain Monte Carlo (MCMC) methods, which are sampling-based. Instead, I use the Integrated Nested Laplace Approximation (INLA) to provide a deterministic approximation of the parameter posterior distribution in place of random sampling. INLA is faster than MCMC for problems involving a large number of correlated parameters and offers an alternative way to implement complex statistical models which are computationally infeasible with MCMC. This provides researchers and practitioners with a statistical framework to formulate ETAS models incorporating different hypotheses, produce forecasts that account for uncertainty, and test them using CSEP procedures. I build on previous work implementing time-independent seismicity models with INLA, which provided a framework to study the effect of covariates such as depth, GPS displacement, heat flow, strain rate, and distance to the nearest fault, but lacked the ability to describe the clustering process of earthquakes. I show that this work can be extended to include time-dependent Hawkes process models and run in a reasonable computational time using INLA. In this framework, the information from covariates can be incorporated both in modelling the rate of background events and in modelling the number of aftershocks. This resembles how covariate information is incorporated in Generalized Linear Models (GLMs), which are widely used to study the effect of covariates on a range of phenomena; indeed, this work offers a way to borrow ideas and techniques used with GLMs and apply them to seismicity analyses. To make the proposed technique widely accessible, I have developed a new R package called ETAS.inlabru, which offers user-friendly access to the proposed methodology. The ETAS.inlabru package is based on the inlabru R package, which provides access to the INLA methodology. In this thesis, I compare this approach with the MCMC technique implemented in the bayesianETAS package and show that ETAS.inlabru provides similar results to bayesianETAS but is faster, scales more efficiently as the amount of data increases, and supports a wider range of ETAS models, specifically those involving multiple covariates. I believe that this work provides users with a reliable Bayesian framework for the ETAS model, alleviating the burden of modifying or coding their own optimization routines and allowing more flexibility in the range of hypotheses that can be incorporated and validated. I have analysed the 2009 L'Aquila and 2016 Amatrice seismic sequences which occurred in central Italy, and found that the depth of the events has a negative effect on aftershock productivity, and that models involving covariates fit the data better than the classical ETAS model.
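As an illustration of the GLM analogy above, covariates $z_1(s), \dots, z_k(s)$ (depth, heat flow, strain rate, and so on) can enter both model components through log-linear predictors; this is a sketch under that assumption, and the exact link functions used in the thesis may differ:

$$
\log \mu(s) = \beta_0 + \sum_{j=1}^{k} \beta_j\, z_j(s), \qquad \log K(s) = \gamma_0 + \sum_{j=1}^{k} \gamma_j\, z_j(s),
$$

so that the background rate $\mu$ and the aftershock productivity $K$ each depend on local covariate values through a linear predictor on the log scale, exactly as in a Poisson GLM.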
Regarding the statistical properties that scores need to possess to provide trustworthy rankings of competing forecasts, I focus on the notion of proper scores. I show that the Parimutuel Gambling (PG) score, used to rank forecasts in previous CSEP experiments, has been used in situations in which it is not proper. Indeed, I demonstrate that the PG score is proper only in one specific situation and improper in general. I compare its performance with two proper alternatives: the Brier and the Logarithmic (Log) scores. The simulation procedure employed for this part of the thesis can easily be adapted to study the properties of other validation procedures, such as the ones used in CSEP, or to determine quantities important for experimental design, such as the amount of data with which the comparison should be performed. This contributes to the wider discussion on the statistical properties of CSEP tests, and is an additional step towards determining the sanity checks that scoring rules have to pass before being used to validate earthquake forecasts in CSEP experiments.
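For concreteness, a (positively oriented) score $S(p, y)$ assigning a value to forecast probability $p$ for a binary outcome $y \in \{0, 1\}$ is proper when reporting the true probability $q$ maximises the expected score, i.e. $\mathbb{E}_{y \sim q}[S(q, y)] \ge \mathbb{E}_{y \sim q}[S(p, y)]$ for all $p$. In their standard binary forms (given here for illustration), the two proper alternatives are

$$
S_{\mathrm{Brier}}(p, y) = -(y - p)^2, \qquad S_{\mathrm{Log}}(p, y) = y \log p + (1 - y)\log(1 - p),
$$

both maximised in expectation at $p = q$; the PG score lacks this guarantee except in the specific situation identified in the thesis.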
- Published
- 2023