Search Results (226 results)
2. Mathematical Geosciences Best Paper Award 2021
- Author
- Dimitrakopoulos, Roussos
- Published
- 2022
3. Best Paper Award 2020
- Published
- 2021
4. Best Paper Award 2019
- Published
- 2020
5. Best Paper Award 2018
- Published
- 2019
6. Mathematical Geosciences Best Paper Award 2016
- Published
- 2017
7. Best Paper Award 2014
- Published
- 2016
8. Best Paper Award 2013
- Published
- 2014
9. Best Paper Award 2010
- Published
- 2011
10. 2009 Best Paper Award
- Published
- 2010
11. 2008 Best Paper Award
- Published
- 2009
12. 2007 Best Paper Award for Mathematical Geosciences
- Published
- 2008
13. Best Paper Award 2006
- Published
- 2007
14. MWD Data-Based Marble Quality Class Prediction Models Using ML Algorithms
- Author
- Ozge Akyildiz, Hakan Basarir, Veena Sajith Vezhapparambu, and Steinar Ellefmo
- Subjects
Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
Brønnøy Kalk AS operates an open pit mine in Norway producing marble, mainly used by the paper industry. The final product is used as filler and pigment for paper production. Therefore, the quality of the product is of utmost importance. In the mine, the primary quality indicator, called TAPPI, is quantified through a laborious sampling process and laboratory experiments. As a part of digital transformation, measurement while drilling (MWD) data have been collected in the mine. The purpose of this paper is to use the recorded MWD data for the prediction of marble quality to facilitate quality blending in the pit. For this purpose, two supervised classification algorithms, conventional logistic regression and random forest, have been employed. The results show that the random forest classification model achieves significantly higher statistical performance, and it can be used as a tool for fast and efficient marble quality assessment.
- Published
- 2023
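The comparison described in the abstract above can be sketched with standard tooling. The following is a minimal illustration, not the authors' code: the MWD feature names and the synthetic data are invented, and only the logistic-regression versus random-forest contrast is reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical MWD features: penetration rate, torque, thrust, rotation speed
X = rng.normal(size=(n, 4))
# Hypothetical binary quality class from a noisy nonlinear rule
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.2 * rng.normal(size=n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=200, random_state=0)):
    model.fit(X_tr, y_tr)                      # train each classifier
    print(type(model).__name__)
    print(classification_report(y_te, model.predict(X_te)))
```

On data with a nonlinear class boundary, as assumed here, the random forest typically scores higher, mirroring the paper's conclusion.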
15. Conditional Simulation for Mineral Resource Classification and Mining Dilution Assessment from the Early 1990s to Now
- Author
- Harry M. Parker and Georges Verly
- Subjects
Mineral resource estimation, Geostatistics, Conditional simulation, Change of support, Nonparametric statistics, Mineral resource classification, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
Andre Journel joined Stanford University in 1978, and his program grew quickly to include a dozen students from the USA, Canada, Europe, and South Africa. He was instrumental in organizing the Second International Geostatistical Conference (Tahoe ’83), during which 13 papers were presented that can be linked to his group. Out of these 13 papers, 9 were mining-related, with 7 on recoverable reserves, 2 on uncertainty, 2 on conditional simulation, and 3 on nonparametric geostatistics. A significant research effort at the time was therefore directed at change of support, global and local recoveries, and uncertainty, but future trends could also be identified, such as nonparametric geostatistics and conditional simulation. This paper is a practical review of conditional simulation as a tool to improve mineral resource estimation in the areas of uncertainty, classification, and mining selectivity or dilution, based on the authors’ experience. Some practical considerations for conditional simulation are briefly discussed. Four case studies from the early 1990s to the late 2010s are presented to illustrate some solutions and challenges encountered when dealing with real-world commercial projects.
- Published
- 2021
16. A New Type of Conditioning of Stationary Fields and Its Application to the Spectral Simulation Approach in Geostatistics
- Author
- Niyaz Ismagilov, Mikhail Lifshits, and A. A. Yakovlev
- Subjects
Geostatistics, Kriging, Stochastic simulation, Spectral method, Applied mathematics, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
This paper considers the problem of conditional stochastic simulation in geostatistics. A new type of conditioning method that generalizes the well-known two-step conditioning approach is introduced. The distinctive feature of the new method is that it solves the modeling problem for multiple stochastic fields in a general setup. A generalized kriging procedure is developed in the paper, using linear combinations of the simulated fields as input data instead of the commonly used separate fields’ values. Although the new conditioning method was developed initially in the framework of a particular approach to geostatistical simulation (the spectral method), it can be applied in many more general settings of conditional simulation of stationary stochastic fields of arbitrary dimension. The workings of the method and its applicability are illustrated with the results of several numerical experiments, including simulation on real oil field data.
- Published
- 2020
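The "well-known two-step conditioning approach" that the abstract generalizes can be shown in a few lines: simulate unconditionally, then krige the data residuals back in. This is a sketch under invented assumptions (a 1-D grid and a Gaussian covariance), not the paper's generalized multi-field procedure.

```python
import numpy as np

def cov(h, r=10.0):
    return np.exp(-(h / r) ** 2)        # Gaussian covariance with range r

rng = np.random.default_rng(1)
x = np.arange(100.0)                    # simulation grid
C = cov(np.abs(x[:, None] - x[None, :]))
L = np.linalg.cholesky(C + 1e-8 * np.eye(len(x)))
z_uncond = L @ rng.normal(size=len(x))  # step 1: unconditional realization

idx = np.array([5, 40, 80])             # data locations (grid indices)
z_data = np.array([1.0, -0.5, 0.3])     # conditioning values

# step 2: simple-kriging weights applied to the data residuals
K = np.linalg.solve(C[np.ix_(idx, idx)], C[idx, :]).T
z_cond = z_uncond + K @ (z_data - z_uncond[idx])
assert np.allclose(z_cond[idx], z_data)  # realization now honors the data
```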
17. Uncertainty Integration in Dynamic Mining Reserves
- Author
- Amílcar Soares, Cristina da Paixão Araújo, and João Neves
- Subjects
Mine planning, Mathematical optimization, Scheduling (production processes), Probability density function, Mixture model, Quantile, Stochastic simulation, Block model, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
In a mining routine, the uncertainty assessment of resources can be evaluated through stochastic simulation methods that allow the characterisation of local probability density functions on point or block support and, consequently, the spatial uncertainty of grades per ore type. After the characterisation of mean grades and uncertainty per block, the main role of mine planning consists of characterising the time scheduling of reserves to arrive at a mining sequence. This paper seeks to transform the estimated block grades and uncertainty into the temporal production flow of average grades and consequent temporal uncertainty. The most straightforward approach consists of calculating the mining sequence for each of the simulated realisations of blocks, followed by assessing the uncertainty of each period in each sequence, and finally opting for the optimal sequence, according to an objective function. However, this approach needs to retain the N simulated models and calculate the mining sequence for each one, which can be a cumbersome task, particularly if the dimension of the block model is high. Hence, this paper proposes two methods: one using a Gaussian mixture model, and the other using a quantile interpolation mixture model to aggregate each period's block production forecast by converting each block's static uncertainty into dynamic production uncertainty. This allows the period production uncertainty to be used as an optimisation parameter in mine planning routines. A test on the Neves Corvo mine synthetic case study is presented, demonstrating the applicability of this method in the context of an internal blending strategy or a selective mining approach.
- Published
- 2020
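The aggregation step described above can be illustrated crudely: summarize each block by its simulated mean and spread, then pool the blocks mined in a period into one production distribution. This sketch assumes independence between blocks and invents all numbers; the paper's Gaussian mixture and quantile interpolation treatments are more careful than this.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical per-block statistics summarized from simulated realizations
block_mean = np.array([1.2, 0.9, 1.5, 1.1])    # grades of blocks in a period
block_std = np.array([0.20, 0.30, 0.25, 0.15])
tonnage = np.array([10e3, 12e3, 9e3, 11e3])

# Monte Carlo the tonnage-weighted period grade from the per-block normals
draws = rng.normal(block_mean, block_std, size=(20000, block_mean.size))
period_grade = draws @ tonnage / tonnage.sum()
print(f"period grade: {period_grade.mean():.3f} +/- {period_grade.std():.3f}")
print("P10/P90:", np.quantile(period_grade, [0.1, 0.9]).round(3))
```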
18. Construction of Deep Thermal Models Based on Integrated Thermal Properties Used for Geothermal Risk Management
- Author
- Hejuan Liu and Mather Nasr
- Subjects
Geothermal energy, Well logging, Thermal conductivity, Heat flux, Geothermal gradient, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
The development of deep geothermal energy may be at risk from the economic point of view if the estimated deep thermal field is far from its real state. The strong heterogeneity of geological units makes it challenging to reliably estimate the deep thermal field over a large region. Additionally, the thermal properties of rocks, such as thermal conductivity and the radiogenic element concentration, whether they are from laboratory measurements or inverted from well logs, may strongly control the deep thermal field at a local scale. In this paper, the thermal conductivities of rocks from the Trois-Rivieres region in the Saint Lawrence Lowlands sedimentary basin in eastern Canada are obtained from two methods: (i) direct experimental measurement and (ii) an indirect inversion method using well logs, including gamma ray, neutron porosity, density, and photoelectric absorption factor. The spatial distribution of subsurface temperature in the study area is numerically investigated using the Underworld simulator, considering four case studies with different values (minimum, average, and maximum) of the thermal properties. The results show that thermal properties play a large role in controlling the subsurface temperature distribution and heat flux. The temperature difference caused by the difference in thermal properties can reach 15 °C in the basement of the Trois-Rivieres region. The highest heat flux is found in the Trenton–Black River–Chazy groups, and the lowest heat flux is in the Potsdam group, which also has the highest thermal conductivity. Vertical heat flux does not change linearly with depth but is highly related to the thermal properties of specific geological formations. Furthermore, it does not have a positive correlation with the vertical temperature changes. This demonstrates that assessing the potential of deep geothermal energy merely from the surface heat flux may greatly overestimate or underestimate the geothermal capacity. Construction of thermal models based on thermal properties integrated from both experimental measurements and well logs, as done in this paper, is useful in reducing the exploration risk associated with the utilization of deep geothermal energy.
- Published
- 2019
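Why conductivity and radiogenic heat production control the temperature field can be seen from the textbook 1-D steady-state conductive geotherm, T(z) = T0 + (q0/k)z - Az^2/(2k). The values below are illustrative, not the paper's Trois-Rivieres data.

```python
import numpy as np

T0 = 10.0      # surface temperature, deg C
q0 = 60e-3     # surface heat flux, W/m^2
k = 2.5        # thermal conductivity, W/(m K)
A = 1.0e-6     # radiogenic heat production, W/m^3

z = np.linspace(0.0, 4000.0, 5)                  # depth, m
T = T0 + (q0 / k) * z - A * z**2 / (2.0 * k)     # steady-state temperature
q = q0 - A * z                                   # heat flux decreases downward
for zi, Ti, qi in zip(z, T, q):
    print(f"z = {zi:6.0f} m   T = {Ti:6.1f} C   q = {qi * 1e3:5.1f} mW/m^2")
```

Doubling k halves the conductive gradient, which is the sensitivity the four case studies in the paper probe.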
19. Assessment of Geothermal Resources in Petroliferous Basins in China
- Author
- Bincheng Guo, Junwen Hu, Shejiao Wang, Jiahong Yan, Lufeng Zhan, Feng Li, Ningsheng Chen, and Qi Tang
- Subjects
Geothermal energy, Geochemistry, Aquifer, Structural basin, Well drilling, Coal, Geothermal gradient, Thermal energy, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
In order to speed up the development and utilization of hydrothermal energy, it is essential to assess the potential of geothermal resources in petroliferous basins. In this paper, the distribution of reservoirs (aquifers) and the characteristics of geothermal fields have been studied systematically based on geological, geophysical, well drilling, temperature, and sample test data obtained from the major petroliferous basins of China. It has been found that some of the porous sandstone formations in these petroliferous basins are major geothermal reservoirs and are extensively thick and widely distributed. In general, the geothermal gradient in China is higher in the eastern basins and lower in the western basins. On average, the geothermal gradient is above 30 °C/km in the Bohai Bay, Songliao, and Subei basins. The geothermal resource abundance is also higher in eastern China and in the Beibuwan basin in southern China, where the geothermal source-forming conditions are better, followed by the Ordos, Qaidam, and Sichuan basins in central China. Other potential basins include the Tarim and Junggar basins in western China, where the geothermal gradient ranges between 21 and 22 °C/km on average. In this paper, three methods (stochastic simulation, unit volumetric, and analogy) were used for the assessment of geothermal resources. Using the stochastic simulation and unit volumetric methods, the geothermal resources, annual recovered geothermal resources, geothermal water resources, and thermal energy of water in 11 basins or blocks up to 4000 m deep were calculated. Grading evaluation criteria were established by considering the heterogeneity of geothermal reservoirs. The results showed that the petroliferous basins are very rich in geothermal resources. The annual recoverable resources reach 1626.8 × 10^6 tons of standard coal, of which grade I, grade II, and grade III resources are 641.9 × 10^6, 298.6 × 10^6, and 686.3 × 10^6 tons of standard coal, respectively. The results demonstrate that the development and utilization of geothermal energy in oilfields has huge potential for industrial production and household use, and great significance for the development of green oilfields. Given the high demand for heat, the eastern oilfields with high geothermal resource abundance should be the first to be considered for the production and utilization of geothermal energy, followed by the central and western oilfields.
- Published
- 2019
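The "unit volumetric" method named in the abstract is, in essence, a heat-in-place calculation. The following sketch uses invented reservoir parameters; the 29.3 GJ per ton conversion for standard coal equivalent is conventional, but the recovery factor and volume here are placeholders.

```python
rho_r, c_r = 2600.0, 900.0    # rock density (kg/m^3), specific heat (J/(kg K))
rho_w, c_w = 1000.0, 4186.0   # water density and specific heat
phi = 0.15                    # porosity of the sandstone reservoir
V = 1.0e9                     # reservoir volume, m^3 (1 km^3)
T_res, T_ref = 90.0, 15.0     # reservoir and reference temperatures, deg C
R = 0.1                       # assumed recovery factor

q_vol = (phi * rho_w * c_w + (1 - phi) * rho_r * c_r) * (T_res - T_ref)
Q = q_vol * V                 # heat in place, J
tce = R * Q / 29.3e9          # tons of standard coal equivalent
print(f"heat in place: {Q:.3e} J, recoverable: {tce:.3e} t coal equivalent")
```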
20. Economic Impacts of the Geothermal Industry in Beijing, China: An Input–Output Approach
- Author
- Yalin Lei, Jing Liu, and Yong Jiang
- Subjects
Geothermal energy, Natural resource economics, Consumption (economics), Production (economics), Economic impact analysis, Greenhouse gas, Electricity, Beijing, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
Geothermal energy is a clean energy source that can potentially mitigate greenhouse gas emissions, as its use can lead to a lower mitigation cost. However, research on the economic impacts of the geothermal industry is scarce. This paper describes the economic inputs and outputs of the geothermal industry, using Beijing as a case study, and adopts the input–output model. The results show that the demand for and input use of the geothermal sector vary greatly across industrial sectors: electricity, heat production, the supply industry and general equipment manufacturing have the greatest direct consumption coefficients for the geothermal industry. When considering direct and indirect demand, it is clear that the geothermal industry affects different industrial sectors in diverse ways. Its influence coefficient and sensitivity coefficient are 1.2167 (ranked 11th) and 1.2293 (ranked 8th), respectively, revealing that it exerts obvious demand-pulling and supply-pushing effects on the regional economy.
- Published
- 2019
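The influence and sensitivity coefficients quoted in the abstract come from the standard Leontief machinery: with technical coefficient matrix A, total requirements are (I - A)^-1, and the coefficients are its normalized column and row sums. A minimal sketch with an invented 3-sector matrix:

```python
import numpy as np

A = np.array([[0.10, 0.30, 0.05],   # technical (direct consumption) coeffs
              [0.20, 0.10, 0.15],
              [0.05, 0.10, 0.10]])
L = np.linalg.inv(np.eye(3) - A)    # Leontief inverse

col_sums, row_sums = L.sum(axis=0), L.sum(axis=1)
influence = col_sums / col_sums.mean()     # demand-pulling effect per sector
sensitivity = row_sums / row_sums.mean()   # supply-pushing effect per sector
print("influence:  ", influence.round(4))
print("sensitivity:", sensitivity.round(4))
```

A sector with both coefficients above 1, like the geothermal industry in the paper's Beijing results, pulls demand and pushes supply more than the average sector.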
21. Inference of Global Reservoir Connectivity from Static Pressure Data with Fast Coarse-Scale Simulation Models
- Author
- Michael J. King, Behnam Jafarpour, and Morteza Khodabakhshi
- Subjects
Delaunay triangulation, Inference, Aquifer, Static pressure, Calibration, Ensemble Kalman filter, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
Characterization of field-scale reservoir connectivity is critical for production optimization and field development planning. The information content of the data corresponding to different physical processes, coupled with the discrepancy in production data and geologic model resolutions, complicate the estimation of global field connectivity. Due to its diffusive nature, pressure varies smoothly in the reservoir, a characteristic that can be adequately captured using coarse-scale description of the connectivity in the reservoir model. On the other hand, flowrate and watercut information are relatively more localized and require higher-resolution models to adequately describe their variations. Hence, pressure and flowrate data carry information pertinent to their respective scales. In this paper, a fast workflow is presented for estimating large-scale reservoir connectivity from static reservoir pressure measurements. For this purpose, a coarse-scale grid system with Delaunay triangulation is developed to represent the global reservoir connectivity. Flow-based upscaling is applied to create the initial coarse-scale static simulation models from geological or simulation models. The ensemble Kalman filter is then applied to calibrate the initial models against static pressure data and to identify field-scale connectivity parameters that control the global pressure distribution, such as aquifer strength, continuity/discontinuity in reservoir properties, and fault transmissibilities. This approach enables fast characterization of field-scale reservoir connectivity from pressure data with a coarser description of the reservoir connectivity, which offers a parameterization level that is commensurate with the resolution and information content of the static pressure measurements. The proposed calibration approach can serve as preconditioning to generate initial reservoir models with consistent global connectivity patterns for full-scale history matching. To illustrate the feasibility and performance of the method, examples are drawn from real field cases in which static pressure data are integrated to infer global reservoir connectivity maps. In this paper, the resulting connectivity maps are used to obtain facies probability maps to generate initial facies models for full-scale history matching.
- Published
- 2018
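The calibration step of this workflow is a standard ensemble Kalman filter update. Below is a schematic sketch: the coarse-scale simulator g, the dimensions, and the data are all placeholders, but the update algebra is the usual one.

```python
import numpy as np

rng = np.random.default_rng(3)
Ne, Nm, Nd = 100, 8, 5          # ensemble size, parameters, pressure data

def g(m):                        # placeholder for the coarse flow simulator
    return np.tanh(m[:Nd])

M = rng.normal(size=(Nm, Ne))    # prior ensemble of connectivity parameters
D = np.stack([g(M[:, j]) for j in range(Ne)], axis=1)  # predicted pressures

d_obs = rng.normal(size=Nd)      # observed static pressures (synthetic)
Cd = 0.01 * np.eye(Nd)           # measurement-error covariance

Ma = M - M.mean(axis=1, keepdims=True)       # parameter anomalies
Da = D - D.mean(axis=1, keepdims=True)       # data anomalies
K = (Ma @ Da.T) @ np.linalg.inv(Da @ Da.T + (Ne - 1) * Cd)  # Kalman gain
pert = rng.multivariate_normal(np.zeros(Nd), Cd, size=Ne).T
M_post = M + K @ (d_obs[:, None] + pert - D)  # updated ensemble
```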
22. Uncertainty Quantification in Reservoir Prediction: Part 1—Model Realism in History Matching Using Geological Prior Definitions
- Author
- Temistocles Simon Rojas, Vasily Demyanov, Michael Andrew Christie, and Daniel Arnold
- Subjects
Bayesian probability, Prior probability, Inference, Support vector machine, Multilayer perceptron, Uncertainty quantification, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
Bayesian uncertainty quantification of reservoir prediction is a significant area of ongoing research, with the major effort focussed on estimating the likelihood. However, the prior definition, which is equally as important in the Bayesian context and is related to the uncertainty in the reservoir model description, has received less attention. This paper discusses methods for incorporating the prior definition into assisted history-matching workflows and demonstrates the impact of non-geologically plausible prior definitions on the posterior inference. This is the first of two papers to deal with the importance of an appropriate prior definition of the model parameter space, and it covers the key issue in updating the geological model: how to preserve geological realism in models that are produced by a geostatistical algorithm rather than manually by a geologist. To preserve realism, geologically consistent priors need to be included in the history-matching workflows; the technical challenge therefore lies in defining the space of all possibilities according to the current state of knowledge. This paper describes several workflows for Bayesian uncertainty quantification that build realistic prior descriptions of geological parameters for history matching using support vector regression and support vector classification. In the examples presented, this approach is used to build a prior description of channel dimensions, which is then used to history-match the parameters of both fluvial and deep-water reservoir geostatistical models. This paper also demonstrates how to handle modelling approaches where geological parameters and geostatistical reservoir model parameters are not the same, such as measured channel dimensions versus affinity parameter ranges of a multi-point statistics model. This can be solved using a multilayer perceptron technique to move from one parameter space to another and maintain realism. The overall workflow was implemented on three case studies, which refer to different depositional environments and geological modelling techniques, and demonstrated the ability to reduce the volume of parameter space, thereby increasing the history-matching efficiency and the robustness of the quantified uncertainty.
- Published
- 2018
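The role of the support vector classifier in this workflow can be sketched simply: learn the region of parameter space a geologist would call plausible, then screen history-matching proposals against it. The channel parameters and the plausibility rule below are stand-ins, not the paper's data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
# Hypothetical channel parameters: width (m) and thickness (m)
params = rng.uniform([50.0, 2.0], [500.0, 40.0], size=(400, 2))
# Stand-in plausibility rule mimicking analogue width/thickness relations
plausible = (params[:, 1] > 0.02 * params[:, 0]) & \
            (params[:, 1] < 0.10 * params[:, 0])

clf = SVC(kernel="rbf", gamma="scale").fit(params, plausible)
# History matching would only evaluate proposals the learned prior accepts
proposals = rng.uniform([50.0, 2.0], [500.0, 40.0], size=(5, 2))
print(clf.predict(proposals))
```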
23. Geostatistics for Compositional Data: An Overview
- Author
- K. Gerald van den Boogaart, Ute Mueller, and Raimon Tolosana-Delgado
- Subjects
Multivariate statistics, Multivariate normal distribution, Geostatistics, Compositional data, Variogram, Transformation (function), Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
This paper presents an overview of results for the geostatistical analysis of collocated multivariate data sets, whose variables form a composition, where the components represent the relative importance of the parts forming a whole. Such data sets occur most often in mining, hydrogeochemistry and soil science, but the results gathered here are relevant for any regionalised compositional data set. The paper covers the basic definitions, the analysis of the spatial codependence between components, mapping methods of cokriging and cosimulation honoring compositional constraints, the role of pre- and post-transformations such as log-ratios or multivariate normal score transforms, and block-support upscaling. The main result is that multivariate geostatistical techniques can and should be performed on log-ratio scores, in which case the system data-variograms-cokriging/cosimulation is intrinsically consistent, delivering the same results regardless of which log-ratio transformation was used to represent them. Proofs of all statements are included in an appendix.
- Published
- 2018
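The paper's main recommendation, to run the geostatistics on log-ratio scores, can be illustrated with the centred log-ratio (clr) transform and its inverse. The compositions below are invented; in practice the scores, not the raw proportions, would feed the variogram and cokriging/cosimulation steps.

```python
import numpy as np

def clr(x):
    g = np.exp(np.log(x).mean(axis=-1, keepdims=True))   # geometric mean
    return np.log(x / g)

def clr_inv(y):
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

comp = np.array([[0.70, 0.20, 0.10],    # e.g., mineral proportions per sample
                 [0.55, 0.30, 0.15]])
scores = clr(comp)                      # geostatistics happens in this space
print(scores.round(3))
print(clr_inv(scores).round(3))         # back-transform recovers compositions
```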
24. Enhanced Multiple-Point Statistical Simulation with Backtracking, Forward Checking and Conflict-Directed Backjumping
- Author
- Mohammad Shahraeeni
- Subjects
Backtracking, Backjumping, Look-ahead, Optimization problem, Pixel, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
During a conventional multiple-point statistics simulation, the algorithm may not find a matched neighborhood in the training image for some unsimulated pixels. These pixels are referred to as dead-end pixels; their existence means that multiple-point statistics simulation is not a simple sequential simulation. In this paper, multiple-point statistics simulation is cast as a combinatorial optimization problem, and an efficient backtracking algorithm is developed to solve it. The algorithm combines backtracking, forward checking, and conflict-directed backjumping, which are introduced and discussed in this paper. It is applied to simulate multiple-point statistics properties of some synthetic training images; in contrast to previously published methods for handling dead-end pixels, no anomalies occurred in any of the produced realizations. In particular, in simulating a channel system, all the channels generated by this method are continuous, which is of paramount importance in fluid flow simulation applications. The results also show that the presence of hard data does not degrade the quality of the generated realizations. The presented method provides a robust algorithmic framework for performing multiple-point statistics simulation.
- Published
- 2018
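The control flow of backtracking with forward checking can be shown on a toy sequential simulation: assign categories along a line so that every adjacent pair occurs in an allowed-transition set standing in for training-image statistics. This illustrates the search mechanics only, not the paper's full MPS algorithm or conflict-directed backjumping.

```python
import random

allowed = {(0, 0), (0, 1), (1, 1), (1, 2), (2, 2), (2, 0)}  # toy statistics
n, cats = 12, (0, 1, 2)

def solve(seq, domains):
    i = len(seq)
    if i == n:
        return seq
    for v in sorted(domains[i], key=lambda _: random.random()):
        if seq and (seq[-1], v) not in allowed:
            continue                        # inconsistent with the last pixel
        domains2 = domains
        if i + 1 < n:
            # forward checking: prune the next pixel's domain before recursing
            nxt = {w for w in domains[i + 1] if (v, w) in allowed}
            if not nxt:                     # dead-end pixel detected early
                continue
            domains2 = domains[:i + 1] + [nxt] + domains[i + 2:]
        out = solve(seq + [v], domains2)
        if out is not None:
            return out
    return None                             # exhausted: backtrack one level

print(solve([], [set(cats) for _ in range(n)]))
```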
25. Fuzzy Clustering with Spatial Correction and Its Application to Geometallurgical Domaining
- Author
- Chong-Yu Xu, Peter A. Dowd, and E. Sepulveda
- Subjects
Fuzzy clustering, Fuzzy logic, Cluster analysis, Spatial variability, Geometallurgy, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
This paper describes a proposed method for clustering attributes on the basis of their spatial variability and the uncertainty of cluster membership. The method is applied to geometallurgical domaining in mining applications. The main objective of geometallurgical clustering is to ensure consistent feed to a processing plant by minimising transitions between different types of feed coming from different domains (clusters). For this purpose, clusters should contain not only similar geometallurgical characteristics but also be located in as few contiguous and compact spatial locations as possible so as to maximise the homogeneity of ore delivered to the plant. Most existing clustering methods applied to geometallurgy have two problems. Firstly, they are unable to differentiate subsets of attributes at the cluster level and therefore cluster membership can only be assigned on the basis of exactly identical attributes, which may not be the case in practice. Secondly, as they do not take account of the spatial relationships they can produce clusters which may be spatially dispersed and/or overlapped. In the work described in this paper a new clustering method is introduced that integrates three distinct steps to ensure quality clustering. In the first step, fuzzy membership information is used to minimise compactness and maximise separation. In the second step, the best subsets of attributes are defined and applied for domaining purposes. These two steps are iterated to convergence. In the final step a graph-based labelling method, which takes spatial constraints into account, is used to produce the final clusters. Three examples are presented to illustrate the application of the proposed method. These examples demonstrate that the proposed method can reveal useful relationships among geometallurgical attributes within a clear and compact spatial structure. The resulting clusters can be used directly in mine planning to optimise the ore feed to be delivered to the processing plant.
- Published
- 2018
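The starting point the paper builds on, plain fuzzy c-means, fits in a short loop; the spatial-correction and attribute-subset steps described above are not reproduced here. Data and settings are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
c, m = 2, 2.0                                 # clusters and fuzzifier
U = rng.dirichlet(np.ones(c), size=len(X))    # memberships, rows sum to 1

for _ in range(100):
    W = U ** m
    centers = (W.T @ X) / W.sum(axis=0)[:, None]       # weighted centroids
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    U_new = inv / inv.sum(axis=1, keepdims=True)       # membership update
    if np.abs(U_new - U).max() < 1e-6:
        break
    U = U_new

print(centers.round(2))    # should sit near (0, 0) and (4, 4)
```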
26. Analysis of a GPS Network Based on Functional Data Analysis
- Author
- Fernando Fernández-Palacín, Sonia Pérez-Plaza, Belén Rosado, Manuel Berrocoso, and Raúl Páez Jiménez
- Subjects
Functional data analysis, Global Positioning System, Geodesy, Geodetic datum, Kalman filter, Missing data, Principal component analysis, Smoothing, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
This paper demonstrates the usefulness of approaching the dynamic study of the precise positioning of a network of permanent global positioning system (GPS) stations through functional data analysis. The displacement data for each GPS station, obtained from observations of the global navigation satellite system, are a discrete sample of the positioning curve. The aim of this paper is to reconstruct the original functions in order to use them as functional data. In the method presented in this paper, the geodetic series are obtained first by processing the GPS data with respect to a reference station. Second, for each station, a cleaning process is applied to eliminate the values considered as outliers, and the missing values are imputed by using a Kalman filter. Finally, the original functions are reconstructed by using smoothing techniques and by evaluating several bases of functions. Moreover, these functions are treated with statistical techniques for functional data. This procedure is applied to the permanent stations of the south of the Iberian peninsula and the north of Africa (SPINA) network. The topocentric series (east, north and up) are analysed. In the analysis of the positioning curves, a synchronized behaviour of the functions is observed in periods with important seismic activity. This behaviour also appears in the analysis of the second principal component of the east and up dimensions. Furthermore, the first two principal components of the east coordinate enable a classification of the stations in the SPINA network. This classification is consistent with previous knowledge of the tectonic plates in the studied area.
- Published
- 2018
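One step of this workflow, reconstructing a smooth positioning curve from a noisy daily coordinate series, can be sketched with a smoothing spline. The series below is synthetic, and the outlier cleaning and Kalman-filter imputation described above are assumed to have already happened.

```python
import numpy as np
from scipy.interpolate import splev, splrep

rng = np.random.default_rng(6)
t = np.arange(365.0)                           # days
east = (0.01 * t + 2.0 * np.sin(2 * np.pi * t / 365.0)
        + rng.normal(0.0, 0.5, t.size))        # trend + seasonal + noise

tck = splrep(t, east, s=float(len(t)))         # s sets the smoothing tradeoff
east_smooth = splev(t, tck)                    # the reconstructed function
print(round(float(np.abs(east - east_smooth).mean()), 3))
```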
27. A Fast Approximation for Seismic Inverse Modeling: Adaptive Spatial Resampling
- Author
- Gregoire Mariethoz, Tapan Mukerji, and Cheolkyun Jeong
- Subjects
Markov chain Monte Carlo, Metropolis–Hastings algorithm, Rejection sampling, Importance sampling, Resampling, Mathematical optimization, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
Seismic inverse modeling, which transforms appropriately processed geophysical data into the physical properties of the Earth, is an essential process for reservoir characterization. This paper proposes a work flow based on a Markov chain Monte Carlo method consistent with geology, well-logs, seismic data, and rock-physics information. It uses direct sampling as a multiple-point geostatistical method for generating realizations from the prior distribution, and Metropolis sampling with adaptive spatial resampling to perform an approximate sampling from the posterior distribution, conditioned to the geophysical data. Because it can assess important uncertainties, sampling is a more general approach than just finding the most likely model. However, since rejection sampling requires a large number of evaluations for generating the posterior distribution, it is inefficient and not suitable for reservoir modeling. Metropolis sampling is able to perform an equivalent sampling by forming a Markov chain. The iterative spatial resampling algorithm perturbs realizations of a spatially dependent variable, while preserving its spatial structure by conditioning to subset points. However, in most practical applications, when the subset conditioning points are selected at random, it can get stuck for a very long time in a non-optimal local minimum. In this paper it is demonstrated that adaptive subset sampling improves the efficiency of iterative spatial resampling. Depending on the acceptance/rejection criteria, it is possible to obtain a chain of geostatistical realizations aimed at characterizing the posterior distribution with Metropolis sampling. The validity and applicability of the proposed method are illustrated by results for seismic lithofacies inversion on the Stanford VI synthetic test sets.
- Published
- 2017
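The accept/reject core of Metropolis sampling is tiny; in the paper the proposal perturbs geostatistical realizations by adaptive spatial resampling, whereas this sketch uses a 1-D Gaussian random walk and a placeholder posterior.

```python
import numpy as np

rng = np.random.default_rng(7)

def log_posterior(m):             # placeholder prior-times-likelihood
    return -0.5 * m ** 2 - 0.5 * ((np.tanh(m) - 0.5) / 0.1) ** 2

m, chain = 0.0, []
for _ in range(5000):
    m_prop = m + rng.normal(scale=0.5)          # perturb the current model
    if np.log(rng.uniform()) < log_posterior(m_prop) - log_posterior(m):
        m = m_prop                              # accept the proposal
    chain.append(m)                             # otherwise keep current model

burned = np.array(chain[1000:])                 # discard burn-in
print(round(float(burned.mean()), 3), round(float(burned.std()), 3))
```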
28. Linear Autonomous Compartmental Models as Continuous-Time Markov Chains: Transit-Time and Age Distributions
- Author
- Carlos A. Sierra and Holger Metzler
- Subjects
Markov chain, Phase-type distribution, Exponential distribution, Probability density function, Probability distribution, Renewal theory, Steady state, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
Linear compartmental models are commonly used in different areas of science, particularly in modeling the cycles of carbon and other biogeochemical elements. The representation of these models as linear autonomous compartmental systems allows different model structures and parameterizations to be compared. In particular, measures such as system age and transit time are useful model diagnostics. However, compact mathematical expressions describing their probability distributions remain to be derived. This paper transfers the theory of open linear autonomous compartmental systems to the theory of absorbing continuous-time Markov chains and concludes that the underlying structure of all open linear autonomous compartmental systems is the phase-type distribution. This probability distribution generalizes the exponential distribution from its application to one-compartment systems to multiple-compartment systems. Furthermore, this paper shows that important system diagnostics have natural probabilistic counterparts. For example, in steady state the system’s transit time coincides with the absorption time of a related Markov chain, whereas the system age and compartment ages correspond with backward recurrence times of an appropriate renewal process. These relations yield simple explicit formulas for the system diagnostics that are applied to one linear and one nonlinear carbon-cycle model in steady state. Earlier results for transit-time and system-age densities of simple systems are found to be special cases of probability density functions of phase-type. The new explicit formulas make costly long-term simulations to obtain and analyze the age structure of open linear autonomous compartmental systems in steady state unnecessary.
- Published
- 2017
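The paper's central identity, that the transit time of an open linear autonomous compartmental system is phase-type distributed, gives the density f(t) = -1^T B exp(Bt) beta for compartmental matrix B and input distribution beta, with mean 1^T (-B)^-1 beta. A two-pool numeric check with invented rates:

```python
import numpy as np
from scipy.linalg import expm

B = np.array([[-1.0, 0.0],      # fast pool: total turnover rate 1/yr
              [0.5, -0.1]])     # slow pool receives half the fast outflow
beta = np.array([1.0, 0.0])     # all input enters the fast pool

def transit_density(t):          # phase-type density f(t) = -1^T B e^{Bt} beta
    return float(-np.ones(2) @ B @ expm(B * t) @ beta)

mean_transit = float(np.ones(2) @ np.linalg.inv(-B) @ beta)
print(f"mean transit time: {mean_transit:.1f} yr")   # 1 + 0.5 * 10 = 6 yr
print(f"f(1.0) = {transit_density(1.0):.4f}")
```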
29. Creaming and the Likelihood of Discovering Additional Giant Petroleum Fields
- Author
- Jostein Lillestøl and Richard Sinding-Larsen
- Subjects
Creaming, Sampling (statistics), Log-normal distribution, Petroleum, Inference, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
The applied context of this paper is the exploration for petroleum resources, like petroleum accumulations in different deposits, where the most promising deposits are likely to be drilled first, based on some size indicators, so-called creaming. The paper explores creaming models in the context of sampling with probabilities in proportion to size, for which a lognormal size distribution has nice analytical features. It departs from the traditional paradigm in petroleum resource assessment. Instead of conceiving a finite population being depleted over time in a decaying fashion with respect to size, the situation is studied within the framework of independent observations (infinite population) and an exploration maturity-dependent creaming factor. The theoretical and practical consequences for inference on the parent population and the probabilities and expectations linked to future discoveries are studied. The theory applies to the issue of remaining sizes of petroleum resources to be found within different future discovery horizons on the mature part of the Norwegian Continental Shelf. The aim is to obtain reasonable and useful predictions, and not to provide the best possible explanation of the exploratory behavior itself.
- Published
- 2016
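The creaming premise can be checked numerically using a known property of the lognormal: size-biased sampling from LN(mu, sigma^2) yields LN(mu + sigma^2, sigma^2), so early size-proportional discoveries are systematically larger than the parent mean. The parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)
mu, sigma = 2.0, 1.0
parent = rng.lognormal(mu, sigma, size=200_000)

p = parent / parent.sum()              # discovery probability prop. to size
discoveries = rng.choice(parent, size=5000, replace=True, p=p)

print("parent mean:            ", round(float(parent.mean()), 1))
print("size-biased sample mean:", round(float(discoveries.mean()), 1))
print("theory exp(mu + 1.5 s^2):",
      round(float(np.exp(mu + 1.5 * sigma**2)), 1))
```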
30. Investigation on Principal Component Analysis Parameterizations for History Matching Channelized Facies Models with Ensemble-Based Data Assimilation
- Author
- Alexandre A. Emerick
- Subjects
Principal component analysis, Kernel principal component analysis, Covariance matrix, Data assimilation, Facies, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
Preserving plausible geological features when updating facies models is still one of the main challenges with ensemble-based history matching. This is particularly evident for fields with a complex geological description (e.g., fluvial channels). There is an impressive amount of research published in the last few years about this subject. However, it appears that there is no definitive solution, and both academia and industry are looking for practical and robust methods. Among the parameterizations traditionally investigated for history matching, the principal component analysis (PCA) of the prior covariance matrix is an efficient alternative to represent models described by two-point statistics. However, there are some recent developments extending PCA-based parameterizations to models described by multiple-point statistics. The first part of this paper presents an investigation of PCA-based schemes for parameterizing channelized facies models for history matching with ensemble-based methods. The following parameterizations are tested: standard PCA, two alternative implementations of kernel PCA, and optimization-based PCA. In the second part of the paper, the optimization-based PCA is modified to allow the use of covariance localization and adapted for simultaneously adjusting the facies type and the permeability values within each facies when history matching production data with an ensemble-based method.
- Published
- 2016
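Standard PCA parameterization, the baseline the paper investigates, reduces history matching to adjusting a short Gaussian vector. A sketch with a synthetic prior ensemble standing in for geostatistical realizations:

```python
import numpy as np

rng = np.random.default_rng(9)
Ne, Ncell = 200, 500
prior = rng.normal(size=(Ncell, Ne))        # stand-in prior realizations

m_mean = prior.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(prior - m_mean, full_matrices=False)
k = 20                                      # retained principal components
Phi = U[:, :k] * (s[:k] / np.sqrt(Ne - 1))  # basis scaled for unit-variance xi

xi = rng.normal(size=k)                     # history matching adjusts xi
m_new = m_mean[:, 0] + Phi @ xi             # reconstructed model realization
print(m_new.shape)
```

Kernel and optimization-based variants replace the linear map Phi with nonlinear reconstructions better suited to channelized facies.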
31. On the Initial Stages of the Densification and Lithification of Sediments
- Author
- C. M. Sands and H. W. Chandler
- Subjects
Dilatant, Consolidation (soil), Yield surface, Compressive strength, Shear (geology), Grain damage, Lithification, Mechanics, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
This paper presents a model that can simulate early rock-forming processes, including the influence of the initial packing of the grains on the subsequent rearrangement that occurs as a consequence of pressure-induced grain damage. The paper is concerned with the behaviour of assemblies of loose grains and the mechanics of early lithification. Consider the concept of shear-induced negative dilatancy, where any shear deformation has a tendency to produce densification even at very low pressures. As shear deformation progresses, positive dilatancy starts to contribute and at the critical state the two effects balance. This concept is encapsulated within the mathematics of the model. The model building scheme is first outlined and demonstrated using a hard particle model. Then, the concept of ‘self cancelling shear deformations’ that contribute to the shear–volume coupling but not to the macroscopic shear deformation is explained. The structure of the hard particle model is modified to include low levels of damage at the grain contacts. A parameter that describes bonding between the grains and possible damage to those bonds is incorporated into a term that, depending on its magnitude, also accounts for frictional resistance between unbonded grains. This parameter has the potential to develop with time, increasing compressive stress, or in response to evolving chemical concentrations. Together these modifications allow densification in the short term, and the formation of sedimentary rocks in the long term, by pressure alone, to be simulated. Finally, simulations using the model are compared with experimental results on soils.
- Published
- 2015
32. Use of Gestalt Theory and Random Sets for Automatic Detection of Linear Geological Features
- Author
- Dafni Sidiropoulou Velidou, Valentyn A. Tolpekin, Tsehaie Woldai, and Alfred Stein
- Subjects
Gestalt psychology, Line segment, Canny edge detector, Constant false alarm rate, Calibration (statistics), Pattern recognition, Remote sensing, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
This paper presents the calibration and application of a Gestalt-based line segment method for automatic geological lineament detection from remote sensing images. This method involves estimation of the scale factor, the angle tolerance, and a threshold on the false alarm rate. It identifies major lineaments as objects characterized by two edges on the image, which appear as transitions from dark to bright and vice versa. These objects were modelled as random sets with parameters drawn from their distributions. Following the geometry of detected segments, a novel validation method assesses the accuracy with respect to a linear vector reference. The methodology was applied to a study area in Kenya where lineaments are prominent in the landscape and are well identifiable from an ASTER image. Error rates were based on distance and local orientation, and the study showed that the existence and size of the objects were sensitive to parameter variation. The false detection rate and missing detection rate were both equal to 0.50, better than the values of 0.65 and 0.63 observed using Canny edge detection. Modelling the uncertainty of geological lineaments with random sets further showed that no core set is formed, indicating that there is an inherent uncertainty in their existence and position, and that the variance is relatively high. Comparing the test area with four areas in the same region showed similar results. Despite some shortcomings in identifying full lineaments from partially observed lineaments, it is concluded that the procedure in this paper is well able to automatically extract lineaments from a remote sensing image and validate their existence.
- Published
- 2015
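For contrast with the Gestalt approach, the Canny baseline mentioned in the abstract can be extended into a crude lineament extractor with a probabilistic Hough transform. This is explicitly not the paper's method; the input file name is hypothetical.

```python
import cv2
import numpy as np

img = cv2.imread("aster_band.tif", cv2.IMREAD_GRAYSCALE)  # hypothetical image
edges = cv2.Canny(img, 50, 150)                  # gradient-based edge map
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                           minLineLength=40, maxLineGap=5)
for x1, y1, x2, y2 in (segments[:, 0] if segments is not None else []):
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))  # local orientation
    print((x1, y1), (x2, y2), round(float(angle), 1))
```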
33. shinyNORRRM: A Cross-Platform Software to Calculate the CIPW Norm
- Author
- Reneé González-Guzmán, Luis Alejandro Elizondo-Pacheco, Abraham González-Roque, Carlos Eduardo Sánchez-Torres, and Kevin Samuel Cárdenas-Muñoz
- Subjects
Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
In this paper, novel free software to compute an accurate CIPW Norm (±0.006 wt.% difference between input and output data) is presented. The package is available in the official repository for user-contributed R packages (CRAN: Comprehensive R Archive Network). The software is able to handle large data sets and considers minor and trace element compositions. The algorithm can calculate odd minerals in igneous rocks, such as cancrinite and calcite, adjust the Fe3+/Fe2+ ratio using different standard approaches, and recalculate the compositions of the rocks on an anhydrous basis (100 ± 0.003 wt.% volatile-free adjusted). Furthermore, the package calculates several petrological parameters, and the graphical outputs are displayed following IUGS scheme standards. The prime aspect of shinyNORRRM is the symbiosis of native R functions with the R package shiny (a web application framework for R) to run the norm in a user-friendly interface. shinyNORRRM can be executed in any operating system and requires no previous programming knowledge, thus promising to be a universal computational program in this matter. The output data are printed in the standard comma-separated values (*.csv) format, which is highly compatible with general spreadsheet editors. In this work, the algorithm of our program is validated using previously compiled whole-rock geochemical databases.
- Published
- 2023
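One of the steps the package automates, recalculating an analysis on an anhydrous (volatile-free) basis before the norm proper, is easy to sketch. The oxide analysis below is illustrative and not taken from the paper.

```python
oxides = {"SiO2": 49.2, "Al2O3": 15.1, "FeO": 9.8, "MgO": 7.9, "CaO": 10.5,
          "Na2O": 2.9, "K2O": 0.7, "TiO2": 1.6, "LOI": 2.3}   # wt.%
volatiles = {"LOI"}

total = sum(v for k, v in oxides.items() if k not in volatiles)
anhydrous = {k: 100.0 * v / total for k, v in oxides.items()
             if k not in volatiles}
print(round(sum(anhydrous.values()), 3))   # 100.0 within rounding
```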
34. Periodic Void Formation in Chevron Folds
- Author
- Giles W. Hunt and Timothy Dodwell
- Subjects
Hinge, Free boundary problem, Overburden pressure, Potential energy, Strain energy, Geometry, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
An energy-based model is developed to describe the periodic formation of voids/saddle reefs in hinge zones of chevron folds. Such patterns have been observed in a series of experiments on layers of paper, as well as in the field. A simplified hinge region in a stack of elastic layers, with straight limbs connected by convex segments, is constructed so that a void forms every $m$ layers and repeats periodically. Energy contributions include the strain energy of bending and the work done both against a confining overburden pressure and an axial compressive load. The resulting total potential energy functional for the system is minimised subject to the constraint of non-interpenetration of layers, leading to representation as a nonlinear second-order free boundary problem. Numerical solutions demonstrate that there can exist a minimum-energy $m$-periodic solution with $m \ne 1$. The model shows good agreement when compared with experiments on layers of paper.
- Published
- 2014
35. Christopher D. Lloyd: Exploring Spatial Scale in Geography
- Author
- Jennifer L. Dungan
- Subjects
Mathematics (miscellaneous), General Earth and Planetary Sciences
Over a decade ago, colleagues and I published a paper that attempted to clarify the meanings of the word “scale” (Dungan et al. 2002). We recommended that the context of “scale” should always be specified according to whether it pertained to (1) a process (or phenomenon), (2) observations (or measurements), or (3) analysis. Our paper, not for the first time and certainly not for the last, pointed out that scale has numerous meanings, some of which are unrelated and some of which are similar, or even contradictory. We suggested that the term should be avoided in favor of more specific terms (such as cartographic ratio, resolution, extent, support, range, grain, etc.) in order to reduce ambiguity. Professor Christopher Lloyd’s book, Exploring Spatial Scale in Geography, is about a diverse collection of scale-related topics. The introduction states that the focus of the book is on scale as “the size or extent of a process”, aligning with context (1) above. However, Professor Lloyd uses the term scale in almost all the other senses as well throughout the 253 pages of this slim volume, without distinguishing contexts (2) and (3). The book’s ten chapters touch on most aspects of spatial scale and illustrate them using case studies from human and physical geography. Professor Lloyd has aimed at a “stand-alone introduction” to the analysis of spatial scale. Unfortunately, the text does not live up to this promise. Though the survey-like nature of the material could be suitable for an introduction and the case studies are typically quite straightforward, there is a lack of coherence or depth-of-explanation that would really help geography students without a solid background in statistics. In numerous instances, terms are brought up without preamble or definition, such as cartograms (page 23), queen’s case contiguity (page 35), two-point patterns (page 92), multi-collinearity (page 114) and cross-validation (page 117).
- Published
- 2015
36. Comparing Training-Image Based Algorithms Using an Analysis of Distance
- Author
- Xiaojin Tan, Jef Caers, and Pejman Tahmasebi
- Subjects
Simulation algorithm, Variogram, Visual comparison, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
As additional multiple-point statistical (MPS) algorithms are developed, there is an increased need for scientific ways of comparing them beyond the usual visual comparison or simple metrics, such as connectivity measures. In this paper, we start from the general observation that any (not just MPS) geostatistical simulation algorithm represents two types of variability: (1) the within-realization variability, namely, that realizations reproduce a spatial continuity model (variogram, Boolean, or training-image based), and (2) the between-realization variability, representing a model of spatial uncertainty. It is argued that any comparison of algorithms needs, at a minimum, to be based on these two randomizations. In fact, for certain MPS algorithms, it is illustrated with different examples that there is often a trade-off: increased pattern reproduction entails reduced spatial uncertainty. In this paper, the subjective choice is made that the best algorithm maximizes pattern reproduction while at the same time maximizing spatial uncertainty. The discussion is also limited to fairly standard multiple-point algorithms; our method does not necessarily apply to more recent or possibly future developments. In order to render these fundamental principles quantitative, this paper relies on a distance-based measure for both within-realization variability (pattern reproduction) and between-realization variability (spatial uncertainty). It is illustrated that this method is efficient and effective for two-dimensional, three-dimensional, continuous, and discrete training images.
- Published
- 2013
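The two randomizations can be made concrete with any distance between realizations: within-realization variability as distance to the training-image statistics, between-realization variability as the mean pairwise distance. The binary images and the crude proportion-plus-transition statistics below are placeholders for the paper's distance.

```python
import numpy as np

rng = np.random.default_rng(10)
reals = rng.integers(0, 2, size=(20, 32, 32))          # 20 toy realizations

def stats(img):                                        # toy pattern statistics
    return np.array([img.mean(),
                     np.abs(np.diff(img, axis=0)).mean(),
                     np.abs(np.diff(img, axis=1)).mean()])

S = np.stack([stats(r) for r in reals])
ti_stats = stats(rng.integers(0, 2, size=(64, 64)))    # stand-in training image

within = float(np.linalg.norm(S - ti_stats, axis=1).mean())   # reproduction
between = float(np.mean([np.linalg.norm(a - b)                # uncertainty
                         for i, a in enumerate(S) for b in S[i + 1:]]))
print(round(within, 4), round(between, 4))
```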
37. Multiscale Parameterization of Petrophysical Properties for Efficient History-Matching
- Author
- Erwan Gloaguen, Caroline Gardet, and Mickaele Le Ravalec
- Subjects
Random field, Geostatistics, Reservoir simulation, Stochastic simulation, Data assimilation, Dynamic data, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
The prediction of fluid flows within hydrocarbon reservoirs requires the characterization of petrophysical properties. Such characterization is performed on the basis of geostatistics and history-matching; in short, a reservoir model is first randomly drawn, and then sequentially adjusted until it reproduces the available dynamic data. Two main concerns typical of the problem under consideration are the heterogeneity of rocks occurring at all scales and the use of data of distinct resolution levels. Therefore, referring to sequential Gaussian simulation, this paper proposes a new stochastic simulation method able to handle several scales for both continuous and discrete random fields. This method adds flexibility to history-matching, as it boils down to the multiscale parameterization of reservoir models. In other words, reservoir models can be updated at either coarse or fine scales, or both. Parameterization adapts to the available data; the coarser the scale targeted, the smaller the number of unknown parameters, and the more efficient the history-matching process. This paper focuses on the use of variational optimization techniques driven by the gradual deformation method to vary reservoir models. Other data assimilation methods and perturbation processes could have been envisioned as well. Lastly, a numerical application case is presented in order to highlight the advantages of the proposed method for conditioning permeability models to dynamic data. For simplicity, we focus on two-scale processes. The coarse scale describes the variations in the trend while the fine scale characterizes local variations around the trend. The relationships between data resolution and parameterization are investigated.
- Published
- 2013
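The gradual deformation driver named in the abstract combines two independent Gaussian realizations with cos/sin weights, preserving mean and covariance, so history matching reduces to a search over one deformation parameter. A minimal check:

```python
import numpy as np

rng = np.random.default_rng(11)
y1, y2 = rng.normal(size=100_000), rng.normal(size=100_000)

def deform(t):
    # cos^2 + sin^2 = 1 keeps the result standard Gaussian for every t
    return y1 * np.cos(np.pi * t) + y2 * np.sin(np.pi * t)

for t in (0.0, 0.25, 0.5):       # t = 0 returns y1, t = 0.5 returns y2
    y = deform(t)
    print(t, round(float(y.mean()), 3), round(float(y.std()), 3))
```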
38. Conditioning Surface-Based Geological Models to Well and Thickness Data
- Author
- Gregoire Mariethoz, Tao Sun, Hongmei Li, Antoine Bertoncello, and Jef Caers
- Subjects
Surface (mathematics), Optimization problem, Geostatistics, Stratigraphy, Variogram, Erosion, Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
Geostatistical simulation methods aim to represent spatial uncertainty through realizations that reflect a certain geological concept by means of a spatial continuity model. The most common spatial continuity models are variogram, training-image, or Boolean based. In this paper, a more recent spatial model of geological continuity is developed, termed the event, or surface-based, model, which is specifically applicable to modeling cases with complex stratigraphy, such as in sedimentary systems. These methods rely on a rule-based stacking of events, which are mathematically represented by two-dimensional thickness variations over the domain, where positive thickness is associated with deposition and negative thickness with erosion. Although it has been demonstrated that surface-based models accurately represent the geological variation present in complex layered systems, they are more difficult to constrain to hard and soft data, as is typically required of practical geostatistical techniques. In this paper, we develop a practical methodology for constraining such models to hard data from wells and thickness data interpreted from geophysics, such as seismic data. Our iterative methodology relies on a decomposition of the parameter optimization problem into smaller, manageable problems.
- Published
- 2013
39. Direct Pattern-Based Simulation of Non-stationary Geostatistical Models
- Author
- Mehrdad Honarkhah and Jef Caers
- Subjects
Stochastic simulation, Pattern recognition, Spatial variability, Spatial analysis, Realization (probability), Mathematics (miscellaneous), General Earth and Planetary Sciences
- Abstract
Non-stationary models often capture the spatial variation of real-world phenomena better than stationary ones. However, the construction of such models can be tedious, as it requires modeling both the statistical trend and the stationary stochastic component. Non-stationary models are an important issue in the recent development of multiple-point geostatistical models. This new modeling paradigm, with its reliance on the training image as the source for spatial statistics or patterns, has had considerable practical appeal. However, the role and construction of the training image in the non-stationary case remain problematic from both a modeling and a practical point of view. In this paper, we provide an easy-to-use, computationally efficient methodology for creating non-stationary multiple-point geostatistical models, for both discrete and continuous variables, based on distance-based modeling and simulation of patterns. In that regard, the paper builds on pattern-based modeling previously published by the authors, whereby a geostatistical realization is created by laying down patterns as puzzle pieces on the simulation grid, such that the simulated patterns are consistent (in terms of a similarity definition) with any previously simulated ones. In this paper, we add the spatial coordinate to the pattern similarity calculation, thereby borrowing patterns only locally from the training image instead of globally; borrowing globally would entail a stationarity assumption. Two ways of adding the geographical coordinate are presented: (1) based on a functional that decreases gradually away from the location where the pattern is simulated, and (2) based on an automatic segmentation of the training image into stationary regions. Using ample two-dimensional and three-dimensional case studies, we study the behavior of the generated realizations in terms of spatial and ensemble uncertainty.
- Published
- 2012
- Full Text
- View/download PDF
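Option (1) above, a functional that decays away from the simulated location, can be hedged into a small dissimilarity function; the Gaussian form and the bandwidth `sigma` below are assumptions, not the authors' exact choice.

```python
import numpy as np

def local_pattern_distance(pat_sim, pat_ti, loc_sim, loc_ti, sigma=30.0):
    """Dissimilarity between the pattern being simulated and a training
    image pattern, augmented with the geographic coordinate: the spatial
    penalty grows as the candidate lies farther from the simulated
    location, so patterns are borrowed locally rather than globally."""
    d_pattern = np.linalg.norm(pat_sim - pat_ti)   # pixel-wise L2 term
    d_space = np.linalg.norm(np.asarray(loc_sim) - np.asarray(loc_ti))
    return d_pattern + (1.0 - np.exp(-0.5 * (d_space / sigma) ** 2))
```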
40. Variogram Matrix Functions for Vector Random Fields with Second-Order Increments
- Author
-
Chunsheng Ma and Juan Du
- Subjects
Mathematical optimization ,Random field ,Covariance matrix ,Multivariate random variable ,Positive-definite matrix ,Gaussian random field ,Matrix (mathematics) ,Mathematics (miscellaneous) ,Matrix function ,Statistics::Methodology ,General Earth and Planetary Sciences ,Applied mathematics ,Variogram ,Mathematics - Abstract
The variogram matrix function is an important measure of the dependence of a vector random field with second-order increments, and is a useful tool for linear prediction or cokriging. This paper proposes an efficient approach to constructing variogram matrix functions based on three ingredients: a univariate variogram, a conditionally negative definite matrix, and a Bernstein function, and derives three classes of variogram matrix functions for vector elliptically contoured random fields. Moreover, various dependence structures among components can be derived through the appropriate mixture procedures demonstrated in this paper. We also obtain covariance matrix functions for second-order vector random fields through the Schoenberg–Lévy kernels.
- Published
- 2011
- Full Text
- View/download PDF
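For reference, the object being constructed above is the variogram matrix function of an $m$-variate random field $\mathbf{Z}$ with second-order increments, whose diagonal entries are the direct variograms and whose off-diagonal entries are the cross variograms:

```latex
\Gamma(\mathbf{h}) \;=\; \tfrac{1}{2}\,\mathbb{E}\!\left[
  \bigl(\mathbf{Z}(\mathbf{x}+\mathbf{h})-\mathbf{Z}(\mathbf{x})\bigr)
  \bigl(\mathbf{Z}(\mathbf{x}+\mathbf{h})-\mathbf{Z}(\mathbf{x})\bigr)^{\!\top}
\right]
```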
41. Moving Surface Spline Interpolation Based on Green’s Function
- Author
-
Xingsheng Deng and Zhong-an Tang
- Subjects
Mathematical analysis ,MathematicsofComputing_NUMERICALANALYSIS ,Trilinear interpolation ,Bilinear interpolation ,Stairstep interpolation ,Linear interpolation ,Multivariate interpolation ,Mathematics (miscellaneous) ,Nearest-neighbor interpolation ,General Earth and Planetary Sciences ,Spline interpolation ,Algorithm ,ComputingMethodologies_COMPUTERGRAPHICS ,Interpolation ,Mathematics - Abstract
Some commonly used interpolation algorithms are analyzed briefly in this paper. Among these methods, biharmonic spline interpolation, which is based on Green's function and was proposed by Sandwell, has become the mainstream method for its high precision, simplicity, and flexibility. However, this minimum-curvature method has two flaws. First, it suffers from undesirable oscillations between data points, which can be remedied by interpolation with splines in tension. Second, the computation time is approximately proportional to the cube of the number of data constraints, making the method slow when data coverage is dense. Focusing on the second problem, this paper introduces a moving surface spline interpolation method based on Green's function, and the interpolation error equations are derived. Because the proposed method selects only the nearest data points for interpolation, using a merge sort algorithm, the computation time is greatly decreased. The optimal number of nearest points can be determined from the interpolation error estimation equation. No matter how many data points there are, the method can be implemented without difficulty. Examples show that the proposed method achieves high interpolation precision and high computation speed at the same time.
- Published
- 2011
- Full Text
- View/download PDF
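Sandwell's biharmonic Green's function in the plane is $g(r)=r^{2}(\ln r-1)$, and the moving-surface idea above amounts to refitting the spline on the nearest constraints for each interpolation point. The sketch below is illustrative only: the fixed `k` and the omission of tension terms are simplifications, whereas the paper derives the optimal number of neighbors from an error estimate.

```python
import numpy as np

def greens_biharmonic(r):
    """Sandwell's 2D biharmonic Green's function g(r) = r^2 (ln r - 1),
    with g(0) = 0 taken by continuity."""
    out = np.zeros_like(r, dtype=float)
    nz = r > 0
    out[nz] = r[nz] ** 2 * (np.log(r[nz]) - 1.0)
    return out

def spline_coefficients(xy, z):
    """Solve G c = z, where G_ij = g(|x_i - x_j|)."""
    r = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    return np.linalg.solve(greens_biharmonic(r), z)

def moving_surface_interpolate(xy_new, xy, z, k=20):
    """Refit a local biharmonic spline on the k nearest data points for
    each interpolation point (the 'moving surface' variant)."""
    est = np.empty(len(xy_new))
    for i, p in enumerate(xy_new):
        idx = np.argsort(np.linalg.norm(xy - p, axis=1))[:k]
        c = spline_coefficients(xy[idx], z[idx])
        est[i] = greens_biharmonic(np.linalg.norm(xy[idx] - p, axis=1)) @ c
    return est
```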
42. Validation Techniques for Geological Patterns Simulations Based on Variogram and Multiple-Point Statistics
- Author
-
S. De Iaco and Sabrina Maggio
- Subjects
Snesim ,Similarity (geometry) ,Computer science ,Curvilinear structure ,High order cumulants ,Geostatistics ,Geologic map ,Spatial distribution ,Multiple-point statistic ,Mathematics (miscellaneous) ,Kriging ,Statistics ,General Earth and Planetary Sciences ,Validation method ,Variogram ,Training image ,Point (geometry) ,Linear least squares - Abstract
Traditional simulation methods that are based on some form of kriging are not sensitive to the presence of strings of connectivity of low or high values. They are particularly inappropriate in many earth science applications where the geological structures to be simulated are curvilinear. In such cases, techniques allowing the reproduction of multiple-point statistics are required. The aim of this paper is to point out the advantages of integrating such multiple-point statistics in a model in order to reproduce the shapes, as well as the heterogeneity structures, of complex geological patterns. A comparison between a traditional variogram-based simulation algorithm (sequential indicator simulation) and a multiple-point statistics algorithm (single normal equation simulation) is presented. In particular, it is shown that the spatial distribution of limestone with meandering channels in Lecce, Italy, is better reproduced by the latter algorithm. The strengths of this study are, first, the use of a training image that is not a fluvial system and, more importantly, the quantitative comparison between the two algorithms. The paper focuses on different metrics that facilitate the comparison of the methods used for simulating the limestone's spatial distribution: both objective measures of similarity of facies realizations and high-order spatial cumulants based on different third- and fourth-order spatial templates are considered.
- Published
- 2011
- Full Text
- View/download PDF
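One of the objective similarity measures alluded to above can be as simple as the proportion of matching facies codes between two realizations; the function below is an illustrative stand-in, not the authors' specific metric.

```python
import numpy as np

def facies_similarity(real_a, real_b):
    """Proportion of grid cells whose facies codes agree: a simple
    objective measure of similarity between categorical realizations."""
    return float(np.mean(np.asarray(real_a) == np.asarray(real_b)))
```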
43. Subdivide and Conquer: Adapting Non-Manifold Subdivision Surfaces to Surface-Based Representation and Reconstruction of Complex Geological Structures
- Author
-
Mohammad Moulaeifard, Florian Wellmann, Simon Bernard, Miguel de la Varga, and David Bommes
- Subjects
Mathematics (miscellaneous) ,General Earth and Planetary Sciences - Abstract
Methods from the field of computer graphics are the foundation for the representation of geological structures in the form of geological models. However, as many of these methods have been developed for other types of applications, some of the requirements for the representation of geological features may not be considered, and the capacities and limitations of different algorithms are not always evident. In this work, we therefore review surface-based geological modelling methods from both a geological and a computer graphics perspective. Specifically, we investigate the use of NURBS (non-uniform rational B-splines) and subdivision surfaces, two main parametric surface-based modelling methods, and compare the strengths and weaknesses of the two approaches. Although NURBS surfaces have been used in geological modelling, subdivision surfaces, a standard method in the animation and gaming industries, have so far received little attention, even though they support arbitrary topologies and watertight boundary representation, two aspects that make them an appealing choice for complex geological modelling. Watertight models are, moreover, an important basis for subsequent process simulations. Many complex geological structures require a combination of smooth and sharp edges. Investigating subdivision schemes with semi-sharp creases is therefore an important part of this paper, as semi-sharp creases characterise the resistance of a mesh structure to the subdivision procedure. Non-manifold topologies, a challenging concept in complex geological and reservoir modelling, are also explored, and a subdivision surface method compatible with non-manifold topology is described. Finally, solving inverse problems by fitting smooth surfaces to complex geological structures is investigated in a case study. The fitted surfaces are watertight, controllable with control points, and topologically similar to the main geological structure. The fitted model can also reduce the cost of modelling and simulation by using a reduced number of vertices compared with the original complex structure.
- Published
- 2022
- Full Text
- View/download PDF
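Surface schemes such as Catmull-Clark are too long to sketch here, but the refine-and-average idea behind subdivision can be shown with its curve analogue, Chaikin corner-cutting; a semi-sharp crease would then amount to pinning tagged vertices for a limited number of rounds. This is an illustrative analogue, not the scheme used in the paper.

```python
import numpy as np

def chaikin(points, iterations=3):
    """Chaikin corner-cutting on a closed polyline: each edge (p, q) is
    replaced by 0.75*p + 0.25*q and 0.25*p + 0.75*q. Repeated rounds
    converge to a smooth quadratic B-spline curve."""
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        q = np.roll(pts, -1, axis=0)          # next vertex on the loop
        new = np.empty((2 * len(pts), pts.shape[1]))
        new[0::2] = 0.75 * pts + 0.25 * q
        new[1::2] = 0.25 * pts + 0.75 * q
        pts = new
    return pts

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
smooth = chaikin(square, iterations=4)   # 64 points on a rounded square
```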
44. Hybrid Iterative Ensemble Smoother for History Matching of Hierarchical Models
- Author
-
Dean Oliver
- Subjects
Mathematics (miscellaneous) ,General Earth and Planetary Sciences - Abstract
The choice of a prior model can have a large impact on the ability to assimilate data. In standard applications of ensemble-based data assimilation, all realizations in the initial ensemble are generated from the same covariance matrix, with the implicit assumption that this covariance is appropriate for the problem. In a hierarchical approach, the parameters of the covariance function, for example the variance, the orientation of the anisotropy, and the ranges in two principal directions, may all be uncertain, which makes the hierarchical approach much more robust against model misspecification. In this paper, three approaches to sampling from the posterior for hierarchical parameterizations are discussed: an optimization-based sampling approach (randomized maximum likelihood, RML), an iterative ensemble smoother (IES), and a novel hybrid of the two (hybrid IES). The three approximate sampling methods are applied to a linear-Gaussian inverse problem for which results can be compared with an exact “marginal-then-conditional” approach. Additionally, the IES and hybrid IES methods are tested on a two-dimensional flow problem with uncertain anisotropy in the prior covariance. The standard IES method is shown to perform poorly in the flow examples because of the poor representation of the local sensitivity matrix by the ensemble-based method. The hybrid method, however, samples well even with a relatively small ensemble size.
- Published
- 2022
- Full Text
- View/download PDF
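The analysis step shared by the ensemble smoother variants above is a Kalman-type update with perturbed observations. The sketch below shows a single smoother update in the linear-Gaussian spirit of the paper's first test case; the array shapes and fixed seed are assumptions for illustration, and in a hierarchical setting the parameter ensemble would also carry the covariance hyper-parameters (variance, ranges, anisotropy angle).

```python
import numpy as np

def es_update(M, D, d_obs, Cd):
    """One ensemble-smoother analysis step with perturbed observations:
    m_j <- m_j + C_md (C_dd + C_d)^(-1) (d_obs + e_j - g(m_j)).
    M: (Nm, Ne) parameter ensemble, D: (Nd, Ne) predicted data g(m_j),
    d_obs: (Nd,) observations, Cd: (Nd, Nd) observation-error covariance."""
    Ne = M.shape[1]
    dm = M - M.mean(axis=1, keepdims=True)       # parameter anomalies
    dd = D - D.mean(axis=1, keepdims=True)       # predicted-data anomalies
    C_md = dm @ dd.T / (Ne - 1)                  # cross-covariance
    C_dd = dd @ dd.T / (Ne - 1)                  # data covariance
    E = np.random.default_rng(0).multivariate_normal(
        np.zeros(len(d_obs)), Cd, size=Ne).T     # perturbed observations
    K = C_md @ np.linalg.solve(C_dd + Cd, d_obs[:, None] + E - D)
    return M + K
```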
45. Fast Summation of Functions on the Rotation Group
- Author
-
Antje Vollrath, Ralf Hielscher, and Jürgen Prestin
- Subjects
Diffraction ,Mathematical optimization ,Mathematics (miscellaneous) ,Scattering ,Kernel (statistics) ,Fast Fourier transform ,Kernel density estimation ,General Earth and Planetary Sciences ,Applied mathematics ,Linear combination ,Rotation (mathematics) ,Rotation group SO ,Mathematics - Abstract
Computing with functions on the rotation group is a task carried out in various areas of application. When it comes to approximation, kernel-based methods are a suitable tool for handling these functions. In this paper, we present an algorithm which allows us to evaluate linear combinations of functions on the rotation group, as well as a truly fast algorithm to sum up radial functions on the rotation group. These approaches, based on nonequispaced FFTs on SO(3), take $\mathcal{O}(M+N)$ arithmetic operations for M and N arbitrarily distributed source and target nodes, respectively. We investigate a selection of radial functions and give explicit theoretical error bounds, as well as numerical examples of approximation errors. Moreover, we provide an application of our method, namely kernel density estimation from electron backscatter diffraction (EBSD) data, a problem relevant in texture analysis.
- Published
- 2010
- Full Text
- View/download PDF
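The quantity being accelerated above is a plain radial-kernel sum over rotations; the brute-force $\mathcal{O}(MN)$ baseline below makes the target of the $\mathcal{O}(M+N)$ NFFT-based algorithm concrete. The rotation-matrix representation and the Gaussian kernel are illustrative choices, not the paper's specific kernels.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def rotation_angle(Ra, Rb):
    """Misorientation angle omega(Ra, Rb) = arccos((tr(Ra^T Rb) - 1)/2),
    the rotational distance underlying radial kernels on SO(3)."""
    c = (np.trace(Ra.T @ Rb) - 1.0) / 2.0
    return np.arccos(np.clip(c, -1.0, 1.0))

def naive_kernel_sum(sources, coeffs, targets, psi):
    """Direct O(M*N) evaluation of f(y_j) = sum_i c_i psi(omega(x_i, y_j)),
    the brute-force baseline the fast summation algorithm replaces."""
    return np.array([
        sum(c * psi(rotation_angle(x, y)) for x, c in zip(sources, coeffs))
        for y in targets
    ])

sources = Rotation.random(50, random_state=0).as_matrix()
targets = Rotation.random(10, random_state=1).as_matrix()
f = naive_kernel_sum(sources, np.ones(50), targets,
                     lambda w: np.exp(-(w / 0.2) ** 2))
```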
46. Applicability of Statistical Learning Algorithms for Spatial Variability of Rock Depth
- Author
-
Pijush Samui and T. G. Sitharam
- Subjects
Computer science ,Online machine learning ,Statistical model ,Function (mathematics) ,computer.software_genre ,Regression ,Relevance vector machine ,Support vector machine ,Mathematics (miscellaneous) ,Statistical learning theory ,General Earth and Planetary Sciences ,Spatial variability ,Data mining ,computer ,Algorithm - Abstract
Two algorithms are outlined, each of which has interesting features for modeling the spatial variability of rock depth. In this paper, the reduced level of rock in Bangalore, India, is derived from data on 652 boreholes in an area covering 220 sq. km. A support vector machine (SVM) and a relevance vector machine (RVM) have been utilized to predict the reduced level of rock in the subsurface of Bangalore and to study the spatial variability of the rock depth. The SVM, which is firmly grounded in statistical learning theory, uses a regression technique based on an ε-insensitive loss function. The RVM is a probabilistic model similar to the widespread SVM, but its training takes place in a Bayesian framework. Prediction results show the ability of these learning machines to build accurate models of the spatial variability of rock depth with strong predictive capabilities. The paper also highlights the advantage of the RVM over the SVM model.
- Published
- 2010
- Full Text
- View/download PDF
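An ε-insensitive SVR of the kind the abstract describes can be set up in a few lines of scikit-learn. The synthetic coordinates, trend, and hyper-parameters below are assumptions standing in for the 652 Bangalore boreholes, which are not bundled with the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.uniform(0, 1000, size=(652, 2))     # easting, northing (synthetic)
y = 900 + 5 * np.sin(X[:, 0] / 150) + rng.normal(0, 1, 652)  # rock level

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", C=100.0, epsilon=0.1))
model.fit(X_tr, y_tr)
print("R^2 on held-out boreholes:", model.score(X_te, y_te))
```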
47. Compressed History Matching: Exploiting Transform-Domain Sparsity for Regularization of Nonlinear Dynamic Data Integration Problems
- Author
-
Dennis McLaughlin, Behnam Jafarpour, Vivek K Goyal, and William T. Freeman
- Subjects
Mathematical optimization ,Mathematics (miscellaneous) ,Compressed sensing ,Basis (linear algebra) ,Signal reconstruction ,Discrete cosine transform ,General Earth and Planetary Sciences ,Basis pursuit ,Sparse approximation ,Inverse problem ,Regularization (mathematics) ,Algorithm ,Mathematics - Abstract
In this paper, we present a new approach for estimating spatially distributed reservoir properties from scattered nonlinear dynamic well measurements by promoting sparsity in an appropriate transform domain where the unknown properties are believed to have a sparse approximation. The method is inspired by recent advances in sparse signal reconstruction formalized under the celebrated compressed sensing paradigm. Here, we use a truncated low-frequency discrete cosine transform (DCT) basis to approximate the spatial parameters with a sparse set of coefficients that are identified and estimated using available observations while imposing sparsity on the solution. The intrinsic continuity in geological features lends itself to sparse representations using selected low-frequency DCT basis elements. By recasting the inversion in the DCT domain, the problem is transformed into identification of significant basis elements and estimation of the values of their corresponding coefficients. To find these significant DCT coefficients, a relatively large number of DCT basis vectors (without any preferred orientation) are initially included in the approximation. Available measurements are combined with a sparsity-promoting penalty on the DCT coefficients to identify coefficients with significant contributions and eliminate the insignificant ones. Specifically, minimization of a least-squares objective function augmented by an ℓ1-norm of the DCT coefficients is used to implement this scheme. The sparsity regularization approach using ℓ1-norm minimization leads to a better-posed inverse problem that mitigates the non-uniqueness of the history-matching solutions and promotes solutions that are, according to the prior belief, sparse in the transform domain. The approach is related to the basis pursuit (BP) and least absolute shrinkage and selection operator (LASSO) methods, and it extends the application of compressed sensing to inverse modeling with nonlinear dynamic observations. While the method appears to be generally applicable for solving dynamic inverse problems involving spatially distributed parameters with a sparse representation in any linear complementary basis, in this paper its suitability is demonstrated using a low-frequency DCT basis and synthetic waterflooding experiments.
- Published
- 2009
- Full Text
- View/download PDF
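The recast inversion can be sketched end to end on a 1D toy problem: a field that is sparse in the DCT domain, a linear operator standing in for the nonlinear dynamic data, and an ℓ1-penalized least-squares fit of the DCT coefficients. Grid size, sparsity level, and the Lasso weight are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso

n, k, m = 256, 12, 60               # grid size, active DCT modes, data count
rng = np.random.default_rng(3)

# "True" log-permeability: sparse in the low-frequency DCT modes.
c_true = np.zeros(n)
c_true[rng.choice(k, size=5, replace=False)] = rng.normal(0, 3, 5)
x_true = idct(c_true, norm="ortho")

# Linear stand-in for the (nonlinear) dynamic data: y = A x + noise.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)

# Recast the inversion in the DCT domain: columns of Phi map DCT
# coefficients to the spatial field; the l1 penalty promotes sparsity.
Phi = idct(np.eye(n), axis=0, norm="ortho")
lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000)
lasso.fit(A @ Phi, y)
x_hat = Phi @ lasso.coef_           # recovered spatial field
```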
48. Surface-Based 3D Modeling of Geological Structures
- Author
-
P. Collon-Drouaillet, Sophie Viseur, Judith Sausse, C. Le Carlier de Veslud, and Guillaume Caumon
- Subjects
Geomodeling ,010504 meteorology & atmospheric sciences ,Computer science ,media_common.quotation_subject ,3D earth modeling ,[SDU.STU]Sciences of the Universe [physics]/Earth Sciences ,010502 geochemistry & geophysics ,01 natural sciences ,Field (computer science) ,Construction engineering ,Task (project management) ,Mathematics (miscellaneous) ,Software ,Quality (business) ,Visualization ,0105 earth and related environmental sciences ,media_common ,Class (computer programming) ,business.industry ,Structural geology ,Interpretation ,3D modeling ,[INFO.INFO-MO]Computer Science [cs]/Modeling and Simulation ,Workflow ,General Earth and Planetary Sciences ,business - Abstract
Building a 3D geological model from field and subsurface data is a typical task in geological studies involving natural resource evaluation and hazard assessment. However, there is quite often a gap between research papers presenting case studies or specific innovations in 3D modeling and the objectives of a typical class in 3D structural modeling, as such classes are increasingly implemented at universities. In this paper, we present general procedures and guidelines to effectively build a structural model made of faults and horizons from typical sparse data. We then describe a typical 3D structural modeling workflow based on triangulated surfaces. Our goal is not to replace software user guides, but to provide the key concepts, principles, and procedures to be applied during geomodeling tasks, with a specific focus on quality control.
- Published
- 2009
- Full Text
- View/download PDF
49. Enriching Representation and Enhancing Nearest Neighbor Classification of Slope/Landslide Data Using Rectified Feature Line Segments and Hypersphere-Based Scaling: A Reproducible Experimental Comparison
- Author
-
Y. M. Ospina-Dávila and Mauricio Orozco-Alzate
- Subjects
Mathematics (miscellaneous) ,General Earth and Planetary Sciences - Abstract
Measurements of geotechnical and natural hazard engineering features, together with pattern recognition algorithms, allow us to categorize the stability of slopes into two main classes of interest: stable or at risk of collapse. The problem of slope stability can be further generalized to that of assessing landslide susceptibility. Many different methods have been applied to these problems, ranging from simple to complex, and often with a scarcity of available data. Simple classification methods are preferred for the sake of both parsimony and interpretability, as well as to avoid drawbacks such as overtraining. In this paper, an experimental comparison was carried out for three simple but powerful existing variants of the well-known nearest neighbor rule for classifying slope/landslide data. One of the variants enhances the representational capacity of the data using so-called feature line segments, while all three consider the concept of a territorial hypersphere per prototype feature point. Additionally, this experimental comparison is entirely reproducible, as Python implementations are provided for all the methods and the main simulation, and the experiments are performed using three publicly available datasets: two related to slope stability and one for landslide susceptibility. Results show that the three variants are very competitive and easily applicable.
- Published
- 2023
- Full Text
- View/download PDF
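The rectified feature line segment idea clamps the projection onto each prototype pair so that no extrapolation beyond the pair occurs. A minimal sketch follows, ignoring the hypersphere-based scaling and assuming at least two distinct prototypes per class; it is a reading of the rule, not the authors' released implementation.

```python
import numpy as np

def segment_distance(q, a, b):
    """Distance from query q to the feature line segment [a, b]; the
    projection parameter is clamped ('rectified') to [0, 1]."""
    ab = b - a
    t = np.clip(np.dot(q - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(q - (a + t * ab))

def classify(q, prototypes):
    """Assign q to the class whose pairwise feature line segments come
    closest. prototypes: dict mapping class label -> (n, d) array."""
    best, label = np.inf, None
    for cls, P in prototypes.items():
        for i in range(len(P)):
            for j in range(i + 1, len(P)):
                d = segment_distance(q, P[i], P[j])
                if d < best:
                    best, label = d, cls
    return label
```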
50. Simpson’s Paradox in Natural Resource Evaluation
- Author
-
Y. Zee Ma
- Subjects
Contingency table ,Modifiable areal unit problem ,Mathematics (miscellaneous) ,Interpretation (philosophy) ,Econometrics ,General Earth and Planetary Sciences ,Inference ,Sampling (statistics) ,Sociology ,Spatial analysis ,Reference class problem ,Simpson's paradox - Abstract
Reversals of statistical relationships, observed when two or more groups of data in a cross tabulation are aggregated, were first revealed more than a century ago. The reversal was later named Simpson's paradox after the reversal examples in his seminal paper drew the attention of the statistical community. However, almost all the published cases have been in sociology and biomedical statistics. Does Simpson's reversal occur in geosciences? Various examples from petroleum geology and reservoir modeling are shown in this paper. Boundary conditions for such a reversal are discussed under a broader framework of sampling analysis. Ecological inference bias, the change of support problem, the modifiable areal unit problem, and the reference class problem are discussed in relation to Simpson's paradox in the framework of spatial statistics. It is demonstrated that the traditional interpretation of the paradox as a result of disproportional sampling based on a contingency table is not always true in the framework of spatial statistics, and that the reversal, while theoretically benign, is inferentially treacherous. Emphasis is therefore placed on combining statistical and scientific inferences in geologic modeling and hydrocarbon resource evaluation under various sampling schemes or support effects, with or without a Simpson's reversal.
- Published
- 2008
- Full Text
- View/download PDF
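The reversal itself takes only a few lines to reproduce. The zone and success counts below are invented purely for arithmetic illustration and have no connection to the paper's petroleum examples: method A wins within each zone, yet method B wins once the zones are pooled.

```python
# zone: (A successes, A trials, B successes, B trials)
wells = {
    "zone 1": (8, 10, 70, 90),
    "zone 2": (20, 90, 2, 10),
}
for zone, (sa, na, sb, nb) in wells.items():
    # A: 0.80 vs B: 0.78 in zone 1; A: 0.22 vs B: 0.20 in zone 2
    print(zone, f"A: {sa / na:.2f}", f"B: {sb / nb:.2f}")

ta = sum(v[0] for v in wells.values()) / sum(v[1] for v in wells.values())
tb = sum(v[2] for v in wells.values()) / sum(v[3] for v in wells.values())
print("pooled", f"A: {ta:.2f}", f"B: {tb:.2f}")   # A: 0.28 vs B: 0.72
```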