14 results for "Doherty, John A."
Search Results
2. Probabilistic Contaminant Source Assessment—Getting the Most Out of Field Measurements.
- Author
-
Hugman, Rui, Lotti, Francesca, and Doherty, John
- Subjects
PARTICLE tracks (Nuclear physics), GROUNDWATER, ELECTRONIC data processing, POLLUTANTS
- Abstract
This paper describes a methodology for undertaking probabilistic investigations into the locations at which contaminants have leaked into a groundwater system. The methodology is built with highly parameterized, stochastic history‐matching in mind. It is able to reduce uncertainties associated with estimates of subsurface hydraulic properties at the same time as it reduces uncertainties associated with inferences of contaminant sources. Particles are used to simulate contaminant movement. This reduces simulator execution time while increasing simulator stability. Borehole measurements of groundwater chemistry are endowed with a binary classification that indicates the presence, or otherwise, of a contaminant plume. This classification is transferred to passing particles as a detect or nondetect status awarded to their trajectories. Because this status is continuous with respect to model parameters, the latter can be adjusted in order to ensure that the same trajectory cannot possess both a detect status and a nondetect status. Particle trajectory statuses can be assigned to model cells from which they are released. By calculating cell statistics using a large number of history‐match‐constrained, stochastic parameter fields, probability maps can be drawn. We illustrate two of these. The first maps the probability that a contaminant sourced at a particular location will go undetected by the current observation network. The second maps the probability that a contaminant source cannot exist at a particular location. The method is extended to examine the worth of supplementing an existing observation network with new wells. Article impact statement: A new methodology for model‐based processing of groundwater data in order to assess contaminant source location. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
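The cell-by-cell probability mapping described in the abstract above reduces, once the ensemble of history-match-constrained parameter fields exists, to simple bookkeeping over detect/nondetect trajectory statuses. A minimal numpy sketch with an invented 4-realisation, 3-cell ensemble (not data from the paper):

```python
import numpy as np

# Invented ensemble: rows are history-match-constrained stochastic
# realisations, columns are candidate source cells. A 1 means particles
# released from that cell carry a "detect" status (their trajectory
# passes a monitored well flagged as contaminated); 0 means "nondetect".
status = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 0],
])

# First map in the paper: probability that a contaminant sourced at a
# given cell goes undetected by the current observation network.
p_undetected = 1.0 - status.mean(axis=0)  # fraction of realisations with status 0
```

Cell 0 is always detected here and cell 2 never is; in the paper the same statistics, gathered over many realisations and cells, are contoured as probability maps and reused to rank candidate additions to the observation network.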
3. Water supply security for the township of Biggenden : A GMDSI worked example report
- Author
-
Gallagher, Mark and Doherty, John
- Subjects
Water supply, GMDSI, Water security, Groundwater modelling, Groundwater, Biggenden
- Abstract
The Groundwater Modelling Decision Support Initiative (GMDSI) is an industry-funded and industry-aligned project focused on improving the role that groundwater modelling plays in supporting environmental management and decision-making. Over the life of the project, it will document a number of examples of decision-support groundwater modelling. These documented worked examples will attempt to demonstrate that by following the scientific method, and by employing modern, computer-based approaches to data assimilation, the uncertainties associated with groundwater model predictions can be both quantified and reduced. With realistic confidence intervals associated with predictions of management interest, the risks associated with different courses of management action can be properly assessed before critical decisions are made. GMDSI worked example reports, one of which you are now reading, are deliberately different from other modelling reports. They do not describe all of the nuances of a particular study site. They do not provide every construction and deployment detail of a particular model. In fact, they are not written for modelling specialists at all. Instead, a GMDSI worked example report is written with a broader audience in mind. Its intention is to convey concepts, rather than to record details of model construction. In doing so, it attempts to raise its readers’ awareness of modelling and data-assimilation possibilities that may prove useful in their own modelling and management contexts.
- Published
- 2020
- Full Text
- View/download PDF
4. Gradient-based model calibration with proxy-model assistance.
- Author
-
Burrows, Wesley and Doherty, John
- Subjects
CALIBRATION, GROUNDWATER recharge, PARAMETER estimation, FINITE differences, MATHEMATICAL models, GROUNDWATER
- Abstract
Summary: Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
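The division of labour in the abstract above, derivatives from the proxy while parameter upgrades are tested on the original model, can be illustrated with a toy Gauss-Newton loop. Everything here (the two-parameter model, the oscillatory noise term standing in for numerical misbehaviour) is invented for illustration and is not the PEST implementation:

```python
import numpy as np

def complex_model(p):
    # "complex" model: smooth physics plus tiny high-frequency numerical
    # noise that would wreck finite differences taken on it directly
    return np.array([p[0]**2 + p[1] + 1e-6 * np.sin(1e6 * p[0]),
                     p[0] - p[1]**2])

def proxy_model(p):
    # smooth analytical proxy linking the same outputs to the same parameters
    return np.array([p[0]**2 + p[1], p[0] - p[1]**2])

obs = np.array([2.0, -3.0])   # synthetic observations

def proxy_jacobian(p, h=1e-6):
    # finite differences on the PROXY, so derivative integrity does not
    # depend on the complex model's numerical behaviour
    J = np.zeros((2, 2))
    for j in range(2):
        dp = np.zeros(2); dp[j] = h
        J[:, j] = (proxy_model(p + dp) - proxy_model(p - dp)) / (2 * h)
    return J

p = np.array([0.5, 0.5])
for _ in range(20):
    r = obs - complex_model(p)   # residuals still come from the real model
    J = proxy_jacobian(p)        # gradients come from the cheap proxy
    p = p + np.linalg.solve(J.T @ J + 1e-8 * np.eye(2), J.T @ r)
```

Finite differences on `complex_model` itself with a step of 1e-6 would be swamped by the oscillatory term; the proxy yields clean gradients, yet the history-match constraint is still enforced against the complex model's outputs.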
5. Efficient Calibration/Uncertainty Analysis Using Paired Complex/Surrogate Models.
- Author
-
Burrows, Wesley and Doherty, John
- Subjects
GROUNDWATER, HYDRAULIC conductivity, SALTWATER encroachment, STOCHASTIC convergence, JACOBIAN matrices, CONJUGATE gradient methods
- Abstract
The use of detailed groundwater models to simulate complex environmental processes can be hampered by (1) long run-times and (2) a penchant for solution convergence problems. Collectively, these can undermine the ability of a modeler to reduce and quantify predictive uncertainty, and therefore limit the use of such detailed models in the decision-making context. We explain and demonstrate a novel approach to calibration and the exploration of posterior predictive uncertainty, of a complex model, that can overcome these problems in many modelling contexts. The methodology relies on conjunctive use of a simplified surrogate version of the complex model in combination with the complex model itself. The methodology employs gradient-based subspace analysis and is thus readily adapted for use in highly parameterized contexts. In its most basic form, one or more surrogate models are used for calculation of the partial derivatives that collectively comprise the Jacobian matrix. Meanwhile, testing of parameter upgrades and the making of predictions is done by the original complex model. The methodology is demonstrated using a density-dependent seawater intrusion model in which the model domain is characterized by a heterogeneous distribution of hydraulic conductivity. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
6. Quantifying the predictive consequences of model error with linear subspace analysis.
- Author
-
White, Jeremy T., Doherty, John E., and Hughes, Joseph D.
- Subjects
LINEAR statistical models, ERRORS, SUBSPACES (Mathematics), CALIBRATION, COMPUTER simulation, GROUNDWATER
- Abstract
All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
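The core of the linear analysis can be sketched in a few lines: calibrating an underdetermined linear model recovers only the projection of reality onto the data-informed subspace, and whether that hurts a prediction depends on the prediction's sensitivity vector. A toy example (matrices invented, not from the paper):

```python
import numpy as np

# Linear toy model: obs = J @ p. Calibration keeps only directions in
# parameter space that the data inform; everything else is projected out,
# which is where prediction-specific bias can hide.
J = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])       # 2 observations, 3 parameters
U, s, Vt = np.linalg.svd(J, full_matrices=False)
R = Vt.T @ Vt                          # resolution matrix: projector onto row space

p_true = np.array([1.0, -2.0, 3.0])
p_est = R @ p_true                     # what calibration can recover at best

# A prediction whose sensitivity lies in the calibrated subspace is unbiased...
y_safe = J.T @ np.array([1.0, 0.0])
# ...one with a null-space component is not, and no fit statistic reveals it.
y_risky = np.array([1.0, 0.0, 0.0])

bias_safe = y_safe @ (p_est - p_true)
bias_risky = y_risky @ (p_est - p_true)
```

`y_safe` lies in the row space of `J`, so its bias vanishes; `y_risky` has a null-space component and inherits a bias that is invisible to the calibration process, which is the paper's central point.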
7. Groundwater modelling in decision support: reflections on a unified conceptual framework.
- Author
-
Doherty, John and Simmons, Craig
- Subjects
PARAMETERIZATION, GROUNDWATER, DECISION making, HYDROLOGY, AQUIFERS
- Abstract
Groundwater models are commonly used as basis for environmental decision-making. There has been discussion and debate in recent times regarding the issue of model simplicity and complexity. This paper contributes to this ongoing discourse. The selection of an appropriate level of model structural and parameterization complexity is not a simple matter. Although the metrics on which such selection should be based are simple, there are many competing, and often unquantifiable, considerations which must be taken into account as these metrics are applied. A unified conceptual framework is introduced and described which is intended to underpin groundwater modelling in decision support with a direct focus on matters regarding model simplicity and complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
8. Optimization and uncertainty assessment of strongly nonlinear groundwater models with high parameter dimensionality.
- Author
-
Keating, Elizabeth H., Doherty, John, Vrugt, Jasper A., and Kang, Qinjun
- Subjects
GROUNDWATER, AQUIFERS, PARAMETER estimation, MONTE Carlo method, DIFFERENTIAL evolution, BAYESIAN analysis
- Abstract
Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU intensive groundwater model and to explore predictive uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
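The null-space Monte Carlo (NSMC) idea at the heart of the methodology can be sketched for a linear toy problem: random parameter perturbations are projected onto the null space of the sensitivity matrix, so each stochastic realisation inherits the calibrated fit without re-running the inversion. Sizes and matrices below are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy linear model obs = J @ p with more parameters than observations.
J = rng.normal(size=(3, 8))            # 3 observations, 8 parameters
obs = np.array([1.0, 0.0, -1.0])
p_cal = np.linalg.lstsq(J, obs, rcond=None)[0]   # a calibrated field

# Project random perturbations onto the null space of J so each
# realisation still fits the data.
_, _, Vt = np.linalg.svd(J)
V_null = Vt[3:].T                      # basis for the 5-dimensional null space

realisations = []
for _ in range(50):
    dp = rng.normal(size=8)
    dp_null = V_null @ (V_null.T @ dp)  # strip the solution-space component
    realisations.append(p_cal + dp_null)

misfits = [np.linalg.norm(J @ p - obs) for p in realisations]
```

For a nonlinear model the projected realisations only approximately honour the data, so NSMC follows the projection with a few cheap re-calibration iterations; the linear case shows why so few are needed.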
9. Efficient regularization and uncertainty analysis using a global optimization methodology.
- Author
-
Moore, Catherine, Wöhling, Thomas, and Doherty, John
- Subjects
GLOBAL optimization, PARETO analysis, PARETO principle, HYDROLOGIC models, GROUNDWATER
- Abstract
The concept of the Pareto front has received considerable attention in the model calibration literature, particularly in conjunction with global search optimizers that have been developed for use in contexts where objective function surfaces are pitted with local optima or characterized by multiple broad regions of attraction in parameter space. In this paper, use of the Pareto concept in such calibration contexts is extended to include regularization and model predictive uncertainty analysis. Both of these processes can be formulated as constrained optimization problems in which a trade-off is analyzed between a set of constraints on model parameters on the one hand and maximization/minimization of one or a number of model outputs of interest on the other hand. In both cases, the optimal trade-off point, though being calculable on a theoretical basis for synthetic cases, must be chosen subjectively when working with real-world models. Two cases are presented to illustrate the methodology: one a synthetic groundwater model and the other a real-world surface water model. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
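The trade-off curve at the centre of the method is an ordinary Pareto front: the points not dominated in both objectives at once. A minimal filter over invented (constraint-violation, model-output) pairs, with both objectives taken as minimized here for simplicity:

```python
def dominates(q, p):
    # q dominates p if it is no worse in both objectives and strictly
    # better in at least one (both objectives minimized)
    return q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])

def pareto_front(points):
    # keep the non-dominated points; these form the trade-off curve
    return [p for p in points if not any(dominates(q, p) for q in points)]

# invented (parameter-constraint violation, prediction of interest) pairs
points = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 2)]
front = pareto_front(points)
```

As the abstract notes, locating the optimal point on this front is theoretically calculable for synthetic cases but is a subjective choice for real-world models.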
10. Using a Cloud to Replenish Parched Groundwater Modeling Efforts.
- Author
-
Hunt, Randall J., Luchette, Joseph, Schreuder, Willem A., Rumbaugh, James O., Doherty, John, Tonkin, Matthew J., and Rumbaugh, Douglas B.
- Subjects
GROUNDWATER, HYDROGEOLOGY, CLOUD computing, INTERNET, COST effectiveness
- Abstract
Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate “virtual” computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
11. Predictive error dependencies when using pilot points and singular value decomposition in groundwater model calibration
- Author
-
Christensen, Steen and Doherty, John
- Subjects
ERROR analysis in mathematics, DECOMPOSITION method, GROUNDWATER, CALIBRATION
- Abstract
Abstract: A significant practical problem with the pilot point method is to choose the location of the pilot points. We present a method that is intended to relieve the modeler from much of this responsibility. The basic idea is that a very large number of pilot points are distributed more or less uniformly over the model area. Singular value decomposition (SVD) of the (possibly weighted) sensitivity matrix of the pilot point based model produces eigenvectors of which we pick a small number corresponding to significant eigenvalues. Super parameters are defined as factors through which parameter combinations corresponding to the chosen eigenvectors are multiplied to obtain the pilot point values. The model can thus be transformed from having many pilot-point parameters to having a few super parameters that can be estimated by nonlinear regression on the basis of the available observations. (This technique can be used for any highly parameterized groundwater model, not only for models parameterized by the pilot point method.) A synthetic model is used to test and demonstrate the application of the method for a case with a highly heterogeneous log-transmissivity field to be estimated from a limited number of hydraulic head observations. It is shown that the method produces a smoothly varying spatial parameter field, that the fit of the estimated log-transmissivity field to the real field varies with the parameterization specification (i.e. the density of pilot points and the number of estimated super parameters), and that the structural errors caused by using pilot points and super parameters to parameterize the highly heterogeneous log-transmissivity field can be significant. For the test case much effort is put into studying how the calibrated model's ability to make accurate predictions depends on parameterization specifications.
It is shown that there exists no unique parameterization specification that produces the smallest possible prediction error variance for all eight studied predictions simultaneously. However, a reasonable compromise of parameterization can be made. It is further shown that it is possible to choose parameterization specifications that result in error variances for some predictions that are greater than those that would be encountered if the model had not been calibrated at all. Test case predictions that have this "problem" are all dependent on the field conditions near an inflow boundary where data are lacking and which exhibit apparently significant nonlinear behavior. It is shown that inclusion of Tikhonov regularization can stabilize and speed up the parameter estimation process. A method of linearized model analysis of predictive uncertainty and of prediction error variance is described. The test case demonstrates that linearized model analysis can be used prior to groundwater model calibration to determine a parameterization specification that produces (close to) minimum possible error variance for predictions that do not behave like seriously nonlinear functions. Recommendations concerning the use of pilot points and singular value decomposition in real-world groundwater model calibration are finally given. [Copyright Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
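The super-parameter construction in the abstract above can be sketched directly: SVD of the sensitivity matrix of a densely pilot-pointed model, truncation to the eigenvectors with significant singular values, and recovery of pilot-point values as their linear combination. Dimensions and values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented sensitivity matrix of a pilot-point model:
# 6 head observations, 40 pilot-point parameters.
X = rng.normal(size=(6, 40))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 4                      # keep eigenvectors with significant singular values
V_k = Vt[:k].T             # 40 x 4 basis of retained parameter combinations

# Super parameters are the factors multiplying the retained eigenvectors;
# pilot-point values are recovered as their linear combination.
super_params = np.array([0.3, -1.2, 0.5, 0.0])
pilot_values = V_k @ super_params      # 40 pilot-point values from 4 numbers
```

Nonlinear regression then estimates the k super parameters (4 here) instead of the 40 pilot-point values, which is what makes the inversion tractable and stable.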
12. The cost of uniqueness in groundwater model calibration
- Author
-
Moore, Catherine and Doherty, John
- Subjects
GROUNDWATER, CALIBRATION, HYDROGEOLOGY, INVERSION (Geophysics)
- Abstract
Abstract: Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The "cost of uniqueness" is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly "well calibrated". Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process to be gained.
A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization. [Copyright Elsevier]
- Published
- 2006
- Full Text
- View/download PDF
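The "weighted average" statement in the abstract above has a compact linear expression: for a regularized inverse, the estimated field equals the true field multiplied by a resolution matrix whose rows hold the averaging weights. A toy Tikhonov example (sizes and regularization weight invented):

```python
import numpy as np

rng = np.random.default_rng(3)
J = rng.normal(size=(5, 12))   # 5 observations, 12 hydraulic-property cells
lam = 0.5                      # Tikhonov regularization weight

# Resolution matrix of the regularized inverse: p_est = R @ p_true
# in the noise-free linear case.
G = np.linalg.solve(J.T @ J + lam * np.eye(12), J.T)   # generalized inverse
R = G @ J

p_true = rng.normal(size=12)
p_est = R @ p_true

# Row i of R holds the averaging weights through which the true field
# everywhere contributes to the estimate at cell i.
```

With only 5 observations informing 12 cells, R is far from the identity, and its trace, the effective number of resolved parameters, is well below 12: a linear-algebra view of the loss of detail the paper calls the cost of uniqueness.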
13. Non-uniqueness of inverse transmissivity field calibration and predictive transport modeling
- Author
-
McKenna, Sean A., Doherty, John, and Hart, David B.
- Subjects
GROUNDWATER flow, STOCHASTIC analysis, ALGORITHMS, HYDRODYNAMICS
- Abstract
Recent work with stochastic inverse modeling techniques has led to the development of efficient algorithms for the construction of transmissivity (T) fields conditioned to measurements of T and head. Small numbers of calibration targets and correlation between model parameters in these inverse solutions can lead to a relatively large region in parameter space that will produce a near-optimal calibration of the T field to measured heads. Most applications of these inverse techniques have not considered the effects of non-unique calibration on subsequent predictions made with the T fields. Use of these T fields in predictive contaminant transport modeling must take into account the non-uniqueness of the T field calibration. A recently developed 'predictive estimation' technique is presented and employed to create T fields that are conditioned to observed heads and measured T values while maximizing the conservatism of the associated predicted advective travel time. Predictive estimation employs confidence and prediction intervals calculated simultaneously on the flow and transport models, respectively. In an example problem, the distribution of advective transport results created with the predictive estimation technique is compared to the distribution of results created under traditional T field optimization where model non-uniqueness is not considered. The predictive estimation technique produces results with significantly shorter travel times relative to traditional techniques while maintaining near-optimal calibration. Additionally, predictive estimation produces more accurate estimates of the fastest travel times. [Copyright Elsevier]
- Published
- 2003
- Full Text
- View/download PDF
14. Foreword: Understanding through Modeling.
- Author
-
Zheng, Chunmiao, Poeter, Eileen, Hill, Mary, and Doherty, John
- Subjects
GROUNDWATER, SYSTEMS design, CONFERENCES & conventions, COMPREHENSION, MODELING (Sculpture), SIMULATION methods & models, TECHNOLOGICAL complexity
- Abstract
The article sheds light on ground water and its study through modeling. MODFLOW-themed conferences held since 1998 have showcased different modeling approaches and simulation programs. These include simulation of coupled ground water/surface water systems, variable-density flow and solute transport modeling, unsaturated-zone flow simulation, parameter estimation, and ground water management optimization modeling.
- Published
- 2006
- Full Text
- View/download PDF