224 results
Search Results
2. Best student papers in Vienna showcase young talent coming through
- Published
- 2016
3. Young geoscientists show the way forward with style in student paper competition
- Author
-
Födisch, H., Hincapie, R., Wegner, J., Ganzer, L., Mascolo, V., Rusciadelli, G., Lecomte, I., Roja Moraleda, L.A., Escalona, A., Schulte, L., and Abdullah Sayghe, S.
- Published
- 2015
4. Enhanced oil recovery for heavy oil in Canada
- Author
-
Majid Nasehi
- Subjects
Geophysics, Enhanced oil recovery, Pulp and paper industry, Geology
- Published
- 2021
5. AVO, near and far - the end of the trend?
- Author
-
J.A. de Bruin
- Subjects
Lithology, Metamorphic petrology, Paleontology, Geophysics, Stratigraphy, Telmatology, Pore fluid, Geology
- Abstract
This paper gives a brief overview of the history of the well-known ‘background trend’ in AVO theory, which also goes by the names ‘lithology trend’, ‘fluid line’ and sometimes ‘noise trend’. Whereas the discovery of the trend was met with euphoria, stirring hopes that it would become much easier to detect hydrocarbons, this promise has hardly been fulfilled. One of the problems was (and is) that the trend is much steeper than theory based on rock properties predicts. This paper explains why and how the trend can be so steep. It explains how a clear trend emerges regardless of lithology, hydrocarbons or even noise. Any contribution from lithology or pore fluid, if it exists, will be smaller and often completely drowned out by what I call the ‘transformation trend’.
- Published
- 2019
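A minimal sketch of the ‘transformation trend’ idea in the entry above, assuming a two-term Shuey-type model R(θ) = A + B·sin²θ over 0–40° incidence (both assumptions of mine, not the author's): even noise-only gathers yield a steep, clear intercept-gradient trend, because the least-squares estimates of A and B are correlated through the fitting transformation itself.

```python
# Hypothetical demonstration: a spurious AVO "background trend" from pure noise.
import numpy as np

rng = np.random.default_rng(0)
theta = np.deg2rad(np.linspace(0, 40, 20))               # assumed incidence angles
G = np.column_stack([np.ones_like(theta),                # R(theta) = A + B*sin^2(theta)
                     np.sin(theta) ** 2])

picks = rng.normal(0.0, 0.05, size=(2000, theta.size))   # noise-only amplitude picks
AB, *_ = np.linalg.lstsq(G, picks.T, rcond=None)         # least-squares A, B per gather
A, B = AB

# The two columns of G are correlated, so the fitted A and B are anti-correlated:
# a steep crossplot trend with no lithology, fluid, or even signal behind it.
print(f"corr(A, B) = {np.corrcoef(A, B)[0, 1]:.2f}, "
      f"trend slope = {np.polyfit(A, B, 1)[0]:.1f}")
```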
6. Seismic wavefield divergence at the free surface
- Author
-
Everhard Muyzert, Nihed El Allouche, Ed Kragh, Nicolas Goujon, and Pascal Edme
- Subjects
Regional geology, Geophysics, Hydrogeology, Pressure measurement, Hydrophone, Noise (signal processing), Acoustics, Free surface, Engineering geology, Divergence (statistics), Geology
- Abstract
This paper explores the concept of pressure measurements in a land seismic acquisition setting. We first review the theory for pressure measurements near the surface of the Earth and show the significance of the S-to-P conversion, which results in the pressure being proportional to the sum of the slowness-scaled horizontal velocity fields. In the second part of the paper we test a land hydrophone prototype in a small-scale experiment to validate the pseudo-pressure measurement. Potential applications are briefly discussed. This study suggests that such a land hydrophone could enable local, omni-directional noise attenuation via adaptive subtraction, using the pseudo-pressure data as the noise model, and could therefore allow sparser acquisition geometries with an associated reduction in field effort and cost.
- Published
- 2018
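A schematic sketch of the proportionality stated in the entry above (pseudo-pressure from slowness-scaled horizontal particle velocities). The function name, the density value, the calibration factor and the idea of supplying local slownesses from a slant-stack are all my assumptions, not the authors' implementation.

```python
import numpy as np

def pseudo_pressure(vx, vy, sx, sy, rho=1800.0, alpha=1.0):
    """Pseudo-pressure ~ rho * (sx*vx + sy*vy): slowness-scaled horizontal velocities.

    vx, vy : horizontal geophone velocity traces
    sx, sy : local horizontal slownesses (s/m), e.g. from a local slant-stack
    rho    : assumed near-surface density (kg/m^3); alpha is a calibration factor
    """
    return alpha * rho * (sx * vx + sy * vy)

# Usage with synthetic traces (illustrative values only):
t = np.linspace(0.0, 1.0, 500)
vx = np.sin(2 * np.pi * 10 * t) * np.exp(-3 * t)
vy = 0.5 * vx
p_noise_model = pseudo_pressure(vx, vy, sx=4e-4, sy=2e-4)
```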
7. Advances in electromagnetic techniques for exploration, prospecting, and monitoring of hydrocarbon deposits
- Author
-
Viacheslav V. Spichak
- Subjects
Regional geology, Geophysics, Exploration geophysics, Engineering geology, Prospecting, Economic geology, Igneous petrology, Geology, Environmental geology, Geobiology
- Abstract
Exploration and prospecting for hydrocarbons (HC) have traditionally been carried out using seismic techniques. At the same time, it is well known that seismic techniques are inefficient in the presence of high-velocity layers (which reduce resolution at great depth), igneous rocks, thrusts within the crystalline basement, and tight limestone. Being sensitive to geological structure, seismic techniques are characterized by low resolution at the level of micro-parameters such as fluid type, porosity/fracturing, and the degree of HC saturation of the pores. Moreover, technical complications, e.g., highly rugged topography, dense vegetation and object remoteness, may make seismic surveys difficult, expensive, or even impossible. Therefore, non-seismic methods are increasingly used in HC exploration and prospecting. In particular, electrical and electromagnetic (EM) methods (magnetotelluric sounding, direct current, time-domain EM (TDEM), induced polarization (IP), controlled-source EM, etc.) complement seismic techniques and increasingly replace them (Johansen, 2008; Key, 2012; Zhang et al., 2014; Barsukov and Fainberg, 2015; Berdichevsky et al., 2015, among others). In parallel, efficient technologies for 3D modelling and inversion (see, for instance, the review paper by Siripunvaraporn (2012) and references therein) and for integrated analysis of EM and other geophysical data (see, for instance, the review paper by Bedrosian (2007)) have been developed. Application of these methods to problems of exploration geophysics has enabled progress in the exploration, prospecting, and development of HC deposits (see, for instance, the review paper by Strack (2014) and references therein). Meanwhile, very recent advances in the indirect estimation of the geophysical properties of lithologic reservoirs from electromagnetic sounding data (Spichak and Goidina, 2016; Spichak and Zakharova, 2015, 2016; Spichak, 2017) open up new possibilities for more sophisticated approaches to estimating reservoir properties and assessing reservoir potential. The purpose of this paper is to demonstrate the advanced capabilities of electromagnetic techniques in solving a wide range of problems, especially those that seismic surveys cannot solve effectively.
- Published
- 2018
8. Ranking velocity models for depth conversion, closure confidence and volumetrics
- Author
-
Nick Crabtree
- Subjects
Closure (topology), Context (language use), Field (geography), Depth conversion, Geophysics, Ranking, Depth map, Point (geometry), Algorithm, Geomorphology, Geology, Environmental geology
- Abstract
This paper discusses the question of closure confidence as it applies to the size of a field. The procedure is demonstrated with reference to a particular case study. After introducing the field in question, the size of the field on the time map is examined and compared with a couple of simple depth maps. This leads to the general questions of how to decide which of several possible structure maps is best, and what ‘best’ means in the context of a depth map of an oil field. The approach used in this paper is to create many thousands of depth maps and use the technique of cross-validation to choose the best depth map(s). The variation of size and extent within these different depth maps is examined, and a statistical attempt is made to determine how large the field is and what the volume uncertainty is. It is concluded that no individual map can be regarded as ‘best’, as the map that predicts the most likely depth at each point on the oil field does not correspond to the most likely volume of oil. A second objective of this paper is to raise awareness of the public-domain freeware at http://xval.sf.net that was used to generate these results.
- Published
- 2017
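A minimal sketch of the cross-validation ranking described in the entry above: score each candidate depth-conversion model by leave-one-well-out prediction error at well markers. The container layout and the `depth_convert` helper are hypothetical placeholders.

```python
import numpy as np

def loo_cv_error(model, wells, depth_convert):
    """RMS leave-one-out misfit between predicted and known marker depths."""
    errors = []
    for i, well in enumerate(wells):
        calibration = wells[:i] + wells[i + 1:]      # withhold well i
        z_pred = depth_convert(model, calibration, well["x"], well["y"])
        errors.append(z_pred - well["marker_depth"])
    return float(np.sqrt(np.mean(np.square(errors))))

# Rank thousands of candidate maps/models by their cross-validation error:
# ranked = sorted(models, key=lambda m: loo_cv_error(m, wells, depth_convert))
```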
9. Is it worth trying to evaluate aircraft landings via passive seismic? The starting points and constraints
- Author
-
Jeena Yohannan, Radek Smeja, Jan Frantisek Kotek, and Jan Mestan
- Subjects
Truck, Seismic vibrator, Soft landing, Hard landing, Field (computer science), Vibration, Geophysics, Passive seismic, Runway, Geology, Marine engineering
- Abstract
This paper discusses the difficulties of evaluating aircraft landings by pairing data collected on board with the determination of plane-runway touchdowns. Today’s on-board sensors provide data that are hard to connect with exact landing styles and the resulting deformations. Passive seismic can provide valuable tools, because it can measure runway vibrations, which are proportional to the deformation of the plane. Even the style of the landing (symmetrical v. asymmetrical) can be captured by distributing sensors along each side of the runway. Although passive seismic is an available and widely used method, it currently seems to be of only limited value for landing evaluation. The difficulties stem from the requirement for runway homogeneity, the need for frequent runway calibration, and noise effects that particularly degrade the quality of soft-landing seismograms. For passive seismic to become a valuable tool, five stages of implementation are proposed. Aircraft landing is a critical part of every flight (Figure 1). A hard landing can lead to hidden damage that can put the people on board in danger on subsequent flights. The question is whether it would be useful to evaluate the quality of landings via runway vibrations instead of on-board sensors, and whether to invest money in long-term tests. The use of seismic sensors in various branches of industry has been spreading over the past decades. They are used for measuring vibrations of bridges, highways or buildings. They are often used in vibroseis trucks for geophysical measurement, where the shape of the sweep vibrations is controlled (Tellier et al., 2015). This paper deals with the reverse problem: the vibrations generated by aircraft landings cannot be controlled, so the seismic properties of the runway have to be well known. There are recent patents on real-time road and runway condition monitoring (Friedlander and Kraemer, 2014; Hagelin et al., 2014). Other issues, such as the use of residual vibration energy from air wake for the production of electricity, have also been discussed (Agarwal and Ali, 2013). This paper is intended to provoke a discussion about bringing real-time passive seismic monitoring into the airport field.
- Published
- 2017
10. High resolution diffraction imaging for reliable interpretation of fracture systems
- Author
-
Zvi Koren, Y. Serfaty, R. Kelvin, D. Chase, B. de Ribet, and G. Yelin
- Subjects
Diffraction, Engineering geology, Seismic attribute, Dispersive body waves, Seismic wave, Geophysics, Reflection (physics), Specular reflection, Focus (optics), Seismology, Geology
- Abstract
Small-scale subsurface features, such as natural fractures, act as scattering sources for seismic waves propagating through the subsurface. The wavefield generated by these source points is identified as diffraction energy. The amplitude of this type of energy is much smaller than that of events reflected from actual interfaces between different geological layers. Moreover, diffraction energy is normally suppressed by conventional processing and standard imaging algorithms, where summation and averaging processes are applied. The common objective in such processing workflows is to focus on the high specular amplitudes in order to enhance the continuity of seismic reflection events and improve the structural mapping of the subsurface. Our goal is to complement the traditional seismic interpretation workflow by integrating information relative to diffraction energy as another seismic attribute to be interpreted. The technique applied in this paper is based on a depth imaging algorithm that maps and bins the recorded surface information into multi-dimensional, local angle domain (LAD) common image gathers. The advantage of this system is its unique ability to decompose the wavefield into reflection and diffraction energy directly at the image locations. This paper provides a brief overview of the technology and illustrates its benefits when applied to the Eagle Ford and Barnett shale reservoirs, where seismic data can be of moderate quality, showing how it leads to accurate, high-resolution, and high-certainty seismic interpretation for risk-managed field development.
- Published
- 2017
11. CO2 sequestration scenarios: challenges and opportunities for EM exploration techniques at Hontomín
- Author
-
Eloi Vilamajo, Fabian Bellmunt, Pilar Queralt, P. Piña, Xènia Ogaya, Alex Marcuello, Joan Campanyà, David Bosch, Juanjo Ledo, and Magdalena Escalas
- Subjects
Exploration geophysics, Scale (chemistry), Engineering geology, Earth science, Field (computer science), Geobiology, Geophysics, Magnetotellurics, Systems engineering, Electrical resistivity tomography, Geology, Environmental geology
- Abstract
The geological storage of CO2 is presented as a transitory solution to reduce human emissions of CO2. To be efficient and safe, it requires very good knowledge of the subsurface and tools to monitor the evolution of the CO2 sequestered in reservoirs. Geophysical techniques are essential for characterizing and monitoring the storage sites. The paper presents an overview of the electromagnetic methods that have been applied at the Hontomín CO2 storage site (Spain) during the past few years. These studies cover numerical simulations and geophysical campaigns at different scales and with different electromagnetic techniques: the magnetotelluric method, controlled-source electromagnetics, and electrical resistivity tomography at lab-core scale. The objective of the paper is to offer a critical review of the EM methods in meeting the challenges of achieving high resolution for deep targets in a relatively noisy environment. We conclude with the lessons learnt from the Hontomín case study. The search for new natural resources, as well as new complex pollution problems, requires new efforts from scientists, in particular the environmental earth sciences community, to analyse the problems and to find appropriate solutions and tools. In this way, advances and improvements in geophysical exploration techniques depend, in part, on the challenges posed by these new problems. The evolution of electromagnetic and electric (EM) methods shows how they have broadened their fields of application as these demanding tasks have been faced. Recent studies (e.g., Constable, 2010; Strack, 2014; Munoz, 2014; Streich, 2016) review the main contributions of these methods in specific fields such as geothermal and hydrocarbon exploration and monitoring, and point out their potential as well as some future directions in each field. EM methods play an important role in these fields given the dependence of resistivity on temperature and/or the presence of fluids.
- Published
- 2016
12. What is the sound of the Earth? First steps into EMusic
- Author
-
Stefano Pontani and Antonio Menghini
- Subjects
Information retrieval, MIDI, Point (typography), Exploit, Database normalization, Geophysics, Sonification, Stratigraphy (archaeology), Protocol (object-oriented programming), Seismology, Geology, Complement (set theory)
- Abstract
We show the possibility of transforming airborne EM (AEM) data into music by means of a simple procedure of data normalization and the application of a Musical Instrument Digital Interface (MIDI) routine. For this introductory work, named ‘EMusic’, we exploit the ability of the MIDI protocol to translate numerical values (voltage responses) into musical pitches. The large amount of data collected by airborne systems can be used to make the EM method easier to comprehend (a didactic purpose), to assess data quality quickly (a technical purpose) and, last but not least, to compose musical pieces (a creative purpose). Through preliminary short samples, we show that it is indeed possible to obtain a ‘sound’ of a particular geological setting, characterized by a specific musical signature, which could support data interpretation. The procedure can be greatly expanded, including to other geophysical methods, and we point out future steps that could be taken. The idea of transforming scientific data into music (sonification) is not new: as reported by Dell’Aversana (2013), many authors have dealt with this topic, mainly by processing seismic data (see further references in the quoted paper). The same author presented music samples extracted from earthquake and volcanic activity, processed through the Musical Instrument Digital Interface (MIDI) protocol. In a second paper, published in 2014, he applied this idea to seismic prospecting for detecting gas-filled channels, faults and geological formations, using rhythmic features that reflect the spectral analysis of the data. Finally, he suggested that this approach can be applied to any kind of geophysical data and that sonification can complement, not substitute for, standard geophysical processing and interpretation routines.
- Published
- 2016
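A minimal sketch of the sonification step described in the entry above: normalize voltage responses and map them to MIDI pitches. The third-party `mido` package, the pitch range and the fixed note duration are my assumptions; the authors do not name their implementation.

```python
import numpy as np
import mido

def em_to_midi(voltages, path="emusic.mid", lo=36, hi=96, ticks=120):
    v = np.asarray(voltages, dtype=float)
    span = v.max() - v.min()
    v = (v - v.min()) / (span if span else 1.0)          # normalize to 0..1
    notes = (lo + v * (hi - lo)).astype(int)             # map to pitch range

    mid = mido.MidiFile()
    track = mido.MidiTrack()
    mid.tracks.append(track)
    for n in notes:
        track.append(mido.Message("note_on", note=int(n), velocity=64, time=0))
        track.append(mido.Message("note_off", note=int(n), velocity=64, time=ticks))
    mid.save(path)

# e.g. one TDEM decay curve per phrase:
# em_to_midi(np.abs(decay_curve))
```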
13. An unsettling science, or poor marketing?
- Author
-
Paolo Sudiro
- Subjects
Global warming, Climate change, Environmental ethics, Scientific evidence, Misconduct, Paleontology, Politics, Geophysics, Credibility, Rhetorical question, Prejudice, Geology
- Abstract
The debate on whether global warming is a threat to humankind, whether human activity has been its cause, and what actions should or could be taken to confront it has exceeded the borders of academic dispute to become a public and political issue. Any action taken in response to global warming, from business as usual to drastic CO2 emissions reduction, will have severe consequences for the environment and the global economy. Therefore, appropriate political decisions should be taken based on the best scientific evidence available. Unfortunately, the debate on climate change has been hampered by systematic distortion of the available facts, and by reciprocal accusations of misconduct between the polarized fields of pro-AGW adherents and AGW-denialists, which have had the effect of undermining the credibility of science itself. Bob Heath’s article on climate change/global warming published in First Break November (Heath, 2015) reminded me of similar papers published a few years ago by different authors and in a different journal, strikingly similar in their approach to the issue of climate change. The papers’ audience was very similar to First Break’s: a general readership of geologists from different disciplines of the earth sciences, not necessarily experts in climate science but with an understanding of the science and its problems. Those papers also disputed the validity of AGW, using more or less the same arguments that Heath uses, mixing technical data to disprove AGW with rhetorical tools to discredit the scientists supporting the opposite position. I am not going to address here the validity of Heath’s claims about the reality, origin, amplitude, and impact of climate change: maybe global warming is not happening, or if it is happening it could be natural, and even if it is anthropogenic there is nothing we can do to stop it, and it may be that a warmer planet is better. Moreover, the climate change debate seems to be more a subject for anthropologists or social scientists, and individual positions on the matter appear to be mainly motivated by personal cultural prejudice, regardless of the science behind them (Boykoff, 2008; Kahan et al., 2010; Lewandowsky et al., 2013a, 2013b). The bottom line is that, even using the best science available, there is no way in which a pro-AGW reader or an AGW-skeptic will change their minds.
- Published
- 2016
14. Performance analysis of forward-looking GPR ultra-wideband antennas for buried object detection
- Author
-
Brian R. Phelan, Kyle A. Gallagher, Kelly D. Sherbondy, and Ram M. Narayanan
- Subjects
Reconfigurable antenna, Directional antenna, Reflective array antenna, Acoustics, Conformal antenna, Antenna measurement, Mineralogy, Slot antenna, Geophysics, Horn antenna, Antenna (radio), Geology
- Abstract
We are currently developing a Stepped-Frequency Radar (SFR) which utilizes a custom-made uniform linear array of 16 Vivaldi notch receive antennas and two Transverse Electromagnetic (TEM) horn transmit antennas. The SFR has an operating band of 300–2000 MHz and a minimum frequency step-size of 1 MHz. The custom-made TEM horn antennas are used for the transmission of the SFR’s ultra-wideband (UWB) spectrum. This paper presents a comparative analysis of a commercially available UWB antenna and the currently used TEM horns. Gain, Voltage Standing Wave Ratio (VSWR), and antenna pattern measurements for each antenna are presented. The antennas were also tested for their ability to detect buried targets in a simple stepped-frequency radar system using a network analyser as transmitter and receiver. An analysis of the gain, VSWR, beamwidth, and measured data from the radar tests of each antenna was performed, providing insight into how each antenna affects the SFR’s ability to detect buried targets. The information provided in this paper will be useful to the radar community in exploring developmental standoff detection solutions for military applications such as the detection of obscured obstacles and explosive hazards.
- Published
- 2015
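A hedged sketch of the basic stepped-frequency radar principle behind the system in the entry above: step 300–2000 MHz in 1 MHz increments (the band and step size stated in the abstract), record the complex response at each step, and IFFT to a down-range profile. The point-target model and its range are illustrative assumptions.

```python
import numpy as np

f = np.arange(300e6, 2000e6 + 1e6, 1e6)          # frequency steps (Hz)
c = 3e8
r_target = 2.5                                   # hypothetical buried-target range (m)

H = np.exp(-1j * 4 * np.pi * f * r_target / c)   # two-way phase of a point target
profile = np.abs(np.fft.ifft(H))                 # synthetic down-range profile

dr = c / (2 * (f[-1] - f[0]))                    # resolution ~ c/(2*BW) ~ 8.8 cm
r = np.arange(f.size) * dr
print(f"peak at {r[np.argmax(profile)]:.2f} m (bin spacing {dr * 100:.1f} cm)")
```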
15. Seismic characterization of the Middle Jurassic Hugin sandstone reservoir in the southern Norwegian North Sea with unsupervised machine learning applications for facies classification
- Author
-
Kurt J. Marfurt, Thang Ha, Ritesh Kumar Sharma, and Satinder Chopra
- Subjects
Identification (information), Geophysics, Facies, Principal component analysis, Seismic attribute, k-means clustering, Unsupervised learning, Data mining, Cluster analysis, Independent component analysis, Geology
- Abstract
Because they allow us to integrate the information content contained in multiple seismic attribute volumes, machine learning techniques hold significant promise in the identification and delineation of heterogeneous 3D seismic facies. However, considerable care must be taken in choosing not only the appropriate attributes but also their scaling. Sometimes such exercises are carried out mechanically, resulting in compromised interpretations and discouraging results. We examine some of the more well-established unsupervised machine learning techniques such as principal component analysis (PCA) and k-means clustering, as well as some less common clustering techniques like independent component analysis (ICA), self-organizing maps (SOM), and generative topographic mapping (GTM), as applied to a seismic data volume from the southern Norwegian North Sea. We find that the machine learning methods can provide increased vertical and spatial resolution. However, machine learning is also good at enhancing noise and artifacts. For this reason, the interpreter needs to ensure that the data are adequately conditioned, that the assumptions on which the applied techniques are based are met, and, finally, that the most appropriate technique among those discussed in this paper is utilized.
- Published
- 2021
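A minimal sketch, using scikit-learn, of the scale-then-reduce-then-cluster workflow the entry above cautions about: attribute scaling, PCA compression, and k-means facies labels. The attribute matrix, component count and cluster count are placeholder assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
attrs = rng.normal(size=(10_000, 6))        # stand-in for co-located attribute vectors

X = StandardScaler().fit_transform(attrs)   # scaling choice matters, as the paper notes
Z = PCA(n_components=3).fit_transform(X)    # compress redundant attributes
facies = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(Z)
```

The same reduced matrix `Z` could be fed to SOM or GTM implementations; the paper's warning is that each step (conditioning, scaling, technique choice) changes the resulting facies map.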
16. An automated pipeline for first break picking and identifying geometry errors
- Author
-
Nikita Kalashnikov, Alexander Kuvaev, Daniil Semin, and Dmitry Podvyaznikov
- Subjects
First break picking, Geophysics, Deep learning, Control (management), Geometry, Quality (business), Artificial intelligence, Pipeline (software), Geology
- Abstract
This paper describes a unified automated pipeline for first break picking and acquisition geometry control with deep learning models. It consists of three global stages, which require minimal human intervention and can be fine-tuned for new surveys to further increase the quality of the result. Evaluation of this pipeline on ongoing projects showed that it significantly sped up both tasks, while the quality of the results was on par with the traditional, mostly manual procedures.
- Published
- 2021
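A hedged sketch of one quality-control stage implied by the entry above: flagging geometry errors by comparing first-break picks against a linear moveout trend (pick time ≈ offset/velocity plus intercept). The threshold and the linear model are my simplifying assumptions; the paper's pipeline uses deep learning models.

```python
import numpy as np

def flag_geometry_errors(offsets, picks, max_resid=0.020):
    """Return a boolean mask of traces whose picks deviate from linear moveout."""
    A = np.column_stack([offsets, np.ones_like(offsets)])
    coef, *_ = np.linalg.lstsq(A, picks, rcond=None)   # slope ~ 1/apparent velocity
    residuals = picks - A @ coef
    return np.abs(residuals) > max_resid               # e.g. mis-positioned receivers
```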
17. Variable separations for sources and streamers in marine seismic acquisition: a novel towing concept to improve survey efficiency
- Author
-
Mark Rhodes, Karim Souissi, and Geir W. Simensen
- Subjects
Variable (computer science), Waves and shallow water, Geophysics, Offset (computer science), Towing, Geology, Marine engineering
- Abstract
This paper presents a novel acquisition methodology aimed at shallow targets and/or shallow water environments that require good near-offset coverage and an improved crossline distance between sources and streamers. The proposed acquisition methodology features variable separations between adjacent sources and between adjacent streamers, and builds on recent advances achieved by the seismic industry in towing the sources wide. This methodology results in better near-offset coverage while improving survey efficiency by up to approximately 28% compared to conventional configurations. A theoretical case study is presented to explain how to design this new towing concept and to demonstrate the expected gains compared to similar configurations.
- Published
- 2021
18. Lithology, porosity and saturation joint prediction using stochastic rock physics modelling and litho-petro-elastic inversion
- Author
-
Khushboo Havelia, Andrea Murineddu, and Surender Manral
- Subjects
Geophysics, Lithology, Petrophysics, Reservoir modeling, Mineralogy, Inversion (meteorology), Saturation (chemistry), Porosity, Joint (geology), Geology, Amplitude versus offset
- Abstract
In this paper we present the application of a single-loop litho-petro-elastic (LPE) inversion, a data assimilation algorithm that uses nonlinear Zoeppritz reflectivity operators with sequential filtering. It integrates rock physics models with seismic amplitude variation with offset (AVO) inversion and Bayesian inversion to define lithology, elastic, and petrophysical properties in a single loop, thus combining several steps of the conventional reservoir characterization workflow. In the conventional multistep approach, the lithology and petrophysical properties are generated sequentially from elastic properties after AVO inversion is performed, which can add prediction uncertainty at each step and produce results that are not correlated with each other. The LPE inversion ensures that the predicted properties maintain the relationships defined by the rock physics model. The LPE inversion was tested on data from offshore Australia with three wells, for a faulted reservoir zone containing oil with a gas cap. It provided robust predictions of lithology, porosity, and water saturation, which matched acceptably at the three wells. The algorithm also accounted for subsurface uncertainties, as it produced prediction probabilities of facies, porosity, and water saturation using multidimensional probability density functions. This approach can be used effectively to classify reservoir properties in a single-loop workflow.
- Published
- 2021
19. Optimal placement of a casing shoe in a challenging HP environment in the Santos Basin, Brazil: demonstrating the value of lookahead VSP methodology
- Author
-
G. Badalini, M. Galaguza, A. Sharp, and I. Troth
- Subjects
Tectonics, Paleontology, Geophysics, Aptian, Stratigraphy, Prospectivity mapping, Drilling, Submarine pipeline, Structural basin, Petrology, Geology, Cretaceous
- Abstract
This paper describes a look-ahead VSP operation that took place in the BM-S-52 concession (Block S-M-508), Santos Basin, offshore Brazil, in June 2009. S-M-508 is a deepwater block located on the north-western edge of the Santos Basin pre-salt province, offshore Brazil, which was awarded to partners Petrobras and BG during the 7th licensing round of 2005 (Figure 1). This paper does not address the numerous recent pre-salt Santos Basin discoveries, nor the geological results from the BM-S-52 drilling campaign. For further detail regarding the pre-salt discoveries, the interested reader is directed to the list of references at the end of this article (Carminatti et al., 2008; Gomes et al., 2008; Moreira et al., 2007). The stratigraphy in S-M-508 comprises (oldest to youngest) an Aptian-Barremian (and older) pre-salt section, locally untested at the time of drilling, followed by an Aptian evaporitic interval, overlain by a thick post-salt section comprising Santonian to Recent clastics. The thickness of the post-salt overburden varies greatly across the block due to irregular salt distribution, and ranges between c. 300 m and 5000 m. Two wells, 6-BG-006P-SPS/Corcovado-1 and 4-BG-007-SPS/Corcovado-2, were drilled in 2009, during the 1st exploration phase of BM-S-52 (January 2006 to 2010). These wells were drilled in water depths of 818 m and 647 m respectively, using the Transocean Celtic Sea rig. The pre-salt section had not been drilled in S-M-508 prior to Corcovado-1 and -2, although the prospectivity of the post-salt interval had been tested by wells 1-BSS-74, -75 and -76, which were drilled in 1994. These wells were located using 2D seismic control and targeted Late Cretaceous, post-salt sandstones.
- Published
- 2014
20. Joint Impedance and Facies Inversion – Seismic inversion redefined
- Author
-
James Gunning and Michael Kemper
- Subjects
Regional geology, Geophysics, Wavelet, Engineering geology, Inverse transform sampling, Seismic inversion, Inversion (meteorology), Geometry, Inverse problem, Petrology, Geology, Environmental geology
- Abstract
In this paper we first review the industry-standard simultaneous inversion method (which derives continuous impedances) and identify some of its pitfalls. We then introduce our new Joint Impedance and Facies Inversion technology (which we call Ji-Fi for short in this paper), which overcomes these pitfalls by recasting the seismic inverse problem as mixed discrete/continuous. Having so captured the correct physics, we apply the method first to a wedge model, followed by a case study, before drawing some conclusions. Note that throughout this paper it is assumed that the seismic to be inverted is an ensemble of true-amplitude partial angle stacks with corresponding wavelets derived from well ties.
- Published
- 2014
21. Trends in geothermal: riding the wave to Paris
- Author
-
Marit B. Brommer
- Subjects
Geophysics, Electricity generation, Natural resource economics, Order (exchange), Geothermal energy, Value proposition, Renewable heat, Energy mix, Geothermal gradient, Geology, Renewable energy
- Abstract
Geothermal energy is a constant and independent form of renewable energy and has the potential to play a key role in the world’s future clean energy mix. Conventional and unconventional geothermal resources are widely available across all continents and can help countries to become less dependent on energy imports and build a broader base for their future energy mix (Falcone, 2019). However, despite its significant potential, the geothermal sector contributes only 0.3% of global power generation and approximately 4% of renewable heat (REN 21, 2021). Besides the need to create more substantial political pull and to develop a mature geothermal market across continents, there is also a need for a collective ‘technology push’ in order to make geothermal mainstream. This paper provides a global stocktake of geothermal energy and points out technology-driven activities and scientific programmes designed to build a strong subsurface energy value proposition, in order to mature two scalable markets: power, and heating and cooling.
- Published
- 2021
22. Multi-purpose high-resolution seismic acquisition: the deep-sea mining case
- Author
-
Fredrik Andersson, Bent Kjølhamar, Adriana Citlali Ramírez, and James Wallace
- Subjects
Mineral exploration, Geophysics, Carbon capture and storage, Drilling, Submarine pipeline, Hydrocarbon exploration, Deep sea, Seismology, Seabed, Natural (archaeology), Geology
- Abstract
The oil and gas industry has developed highly sophisticated technology for offshore hydrocarbon exploration. The traditional focus has been on hydrocarbon exploration and production targets. These targets are commonly buried under a few kilometres of sedimentary layers, and 3D seismic has been the main type of data acquired for characterizing them. A secondary focus has been on the shallow section, driven mostly by shallow-hazard investigations to aid the drilling of those targets. This characterization is commonly done with 2D high-resolution seismic referred to as site surveys. In recent years, shallower targets have been sought for carbon capture and storage (CCS). It is best to store carbon dioxide in its supercritical state, which is reached at burial depths of about 800 m. Thus, the goal is to locate porous rocks with a natural seal at depths of 800–1500 m below the seabed. Deeper reservoirs can be used for CCS, but shallower ones are more economical. In addition, offshore mineral exploration is at the point of becoming a commercial activity. To characterize these mineral reservoirs or deposits, the selected type of data needs to resolve the very near surface (the first few decametres) at very high resolution, in an efficient way that enables the location of targets with an areal extent of 100 to 300 m. Thus, in 2021, 3D seismic is aimed at best resolving both the very shallow and the very deep. These facts motivated the set of experiments acquired in the AM20-lab in the Norwegian Atlantic Margin in 2020. In this paper, we focus on AM20-lab test 2. While the focus of test 2 is to achieve ultra-high-resolution near-surface 3D seismic for mineral exploration, the data provide multipurpose value for medium and deep targets as well. The survey was designed and acquired with a novel signal apparition decasource encoding and was benchmarked against pentasource data from a production multiclient survey designed for hydrocarbon exploration.
- Published
- 2021
23. Velocity for inversion of 3D seismic surveys
- Author
-
Huw James
- Subjects
Data processing, Geophysics, Velocity function, Inversion (meteorology), Geodesy, Geology
- Abstract
In the paper ‘Revisiting Dix’s RMS Velocity’ (James, 2018) it was shown that there is no single velocity function that will accurately correct NMO for all the offsets in a seismic shot record. Even for the simplest possible model with more than one layer, NMO moveout is not hyperbolic. Therefore, since the idea that ‘NMO velocities can be approximated by RMS averages of interval velocities’ is pervasive in seismic data processing, we need some adjustment to our procedures. The impetus to study RMS velocities came from the practice of muting far offsets when performing AVA/AVO analysis, because the far offsets were not flat in time after NMO. With the muting, the information at far offsets was discarded. Historically, NMO velocities were picked directly, so it was never necessary to have reflections that were not flat after correction. Muting for NMO was restricted to the region where refractions and/or reflections interfered with each other.
- Published
- 2021
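A sketch illustrating the entry's central point that multi-layer NMO is not hyperbolic: exact two-layer reflection traveltimes (parameterized by ray parameter) against the hyperbola implied by the Dix RMS velocity. The two-layer model values are hypothetical.

```python
import numpy as np

v = np.array([1800.0, 3000.0])     # layer velocities (m/s), assumed
h = np.array([500.0, 500.0])       # layer thicknesses (m), assumed

p = np.linspace(0.0, 0.999 / v.max(), 400)[:, None]      # ray parameters
sin2 = (p * v) ** 2
x = np.sum(2 * h * v * p / np.sqrt(1 - sin2), axis=1)    # exact offset
t = np.sum(2 * h / (v * np.sqrt(1 - sin2)), axis=1)      # exact traveltime

t0 = np.sum(2 * h / v)                                   # zero-offset time
vrms = np.sqrt(np.sum(v ** 2 * (2 * h / v)) / t0)        # Dix RMS velocity
t_hyp = np.sqrt(t0 ** 2 + (x / vrms) ** 2)               # hyperbolic approximation

i = np.argmin(np.abs(x - 2000.0))                        # residual at ~2 km offset
print(f"residual moveout at {x[i]:.0f} m offset: {(t[i] - t_hyp[i]) * 1000:.1f} ms")
```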
24. The effects of lithology and facies types on the anisotropy parameters and upscaling factor of the sand reservoirs in the deep-water Sadewa field, Kutei Basin, East Kalimantan, Indonesia
- Author
-
Ahmad Mulawarman, Ignatius Sonny Winardhi, V.I. Rossa, Dona Sita Ambarsari, Sigit Sukmono, P D Wardaya, Befriko S Murdianto, T A Sanny, Erlangga Septama, Tavip Setiawan, and R.R. Pratama
- Subjects
Lamination (geology), Petrography, Geophysics, Lithology, Facies, Reservoir modeling, Mineralogy, Anisotropy, Porosity, Poisson's ratio, Geology
- Abstract
Adjustment of the anisotropy parameters and the upscaling factor is very important for the proper integration of core ultrasonic and sonic log data in hydrocarbon reservoir characterization. Related to this issue, this paper evaluates the effects of lithology and facies types on the anisotropy parameters and the upscaling factor. The data are taken from the deep-water oil-gas Sadewa field, located in the Kutei Basin, East Kalimantan, Indonesia, at water depths of 500–750 m. The main reservoirs in this field are the Upper Miocene sand reservoirs deposited in upper slope fan and channel facies. Fifty core plugs sampled at depths around 3000–4000 m were collected for thin-section petrography analysis and ultrasonic measurements. The thin-section petrography analysis shows that both facies areas are dominated by greywackes with parallel lamination structure and/or intergranular porosity. The 1 MHz ultrasonic velocities were measured on the 50 core plugs. Meanwhile, 10–40 kHz dipole sonic log data sampled at the same plug depth positions were used to calculate the elastic (Vp, Vs, Poisson's ratio, etc.) and Thomsen's ε, γ and δ anisotropy parameters. The results show that for the parallel-lamination and intergranular-porosity greywacke samples, the ε parameter is the best for compensating the anisotropy effect in the integrated core and log data reservoir quality determination (sand-shale ratio, percentage of quartz content, and the effective and total porosities). However, when the samples are non-intergranular-porosity greywacke, the anisotropy effect in the core-log data correlation is too irregular to be compensated by any elastic or anisotropy parameter. The core-log data upscaling factor in the channel facies is much smaller than in the upper slope fan facies, in line with the gamma-ray log data, which indicate that the sandy channel facies is more homogeneous than the intercalated shale-sand upper slope fan facies. The overall results suggest that lithology and facies types significantly affect the anisotropy parameters and the upscaling factor. In addition, the best elastic or anisotropy parameter should be determined to minimize these effects in the integration of core and log data for hydrocarbon reservoir characterization.
- Published
- 2021
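For reference alongside the entry above, the standard Thomsen (1986) definitions of ε, γ and δ for a VTI medium, computed from stiffness coefficients; the numerical values below are placeholders, not the paper's data.

```python
def thomsen(c11, c33, c44, c66, c13):
    """Thomsen anisotropy parameters from VTI stiffnesses (any consistent units)."""
    epsilon = (c11 - c33) / (2 * c33)
    gamma = (c66 - c44) / (2 * c44)
    delta = ((c13 + c44) ** 2 - (c33 - c44) ** 2) / (2 * c33 * (c33 - c44))
    return epsilon, gamma, delta

# Placeholder stiffnesses in GPa:
print(thomsen(c11=34.0, c33=30.0, c44=10.0, c66=12.0, c13=9.0))
```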
25. AVO modelling and interpretation with the help of the 1.5D elastic wave-equation
- Author
-
P. Doulgeris, P. Haffinger, and A. Gisolf
- Subjects
Geophysics, Amplitude, Offset (computer science), Scattering, Isotropy, Mathematical analysis, Inversion (meteorology), Wave equation, Elastic modulus, Seismic wave, Geology
- Abstract
In this paper it is argued that, for the purpose of modelling and interpretation with the help of inversion, the elastic wave-equation is the correct way to link seismic data to the properties of the subsurface through which the seismic waves propagate. For amplitude-versus-offset modelling/inversion the 1.5D data model is used, and the elastic wave-equation is therefore also presented in this domain. The parameterization of the isotropic elastic wave-equation is directly in terms of elastic moduli, or compliances, which are closer to the desired reservoir properties than the conventional impedances. An iterative solution of the wave-equation is presented. It is demonstrated that the wave equation-based inversion contributes to increased interpretability, particularly in terms of the hydrocarbon saturation of the reservoirs. Compared with the results from conventional linearized primary reflectivity-based inversion, the wave equation-based method produces property results with a wider spatial bandwidth, because it honours all internal multiple scattering and mode conversions generated over the target interval.
- Published
- 2021
26. Application of direct hydrocarbon indicators for exploration in a Permian-Triassic play, offshore the Netherlands
- Author
-
F. Blom and M. Bacon
- Subjects
Glaciology, Paleontology, Geophysics, Stratigraphy, Engineering geology, Volcanism, Economic geology, Petrology, Palaeogeography, Geology, Environmental geology, Geobiology
- Abstract
This paper describes a case history of exploration in a Permian/Triassic play offshore the Netherlands. The seismic data in the area display clear direct hydrocarbon indicators. The purpose of this paper is to demonstrate how direct hydrocarbon indicators were used in the evaluation of undrilled prospects, in an attempt to determine hydrocarbon presence and hydrocarbon type. Prospect evaluation started with systematic screening of each prospect for acoustically soft amplitude anomalies and flat reflectors. Validation of hydrocarbon presence in these prospects was then attempted through the use of spatial filtering techniques to enhance flat reflectors, and the use of a rock physics model, derived from well data, to explain the observations. The type of hydrocarbons expected in each prospect was predicted by comparing forward modelling results with actual seismic observations. As a result of this approach, exploration drilling success has improved: three recently drilled exploration wells all discovered hydrocarbons in previously validated prospects. We think that successful detection of hydrocarbons from seismic data is possible in this area due to favourable rock and fluid properties, reservoir thickness generally being greater than hydrocarbon column height, and the modest depth at which the reservoirs occur.
- Published
- 2009
27. Subsurface correlation in the Upper Carboniferous (Westphalian) of the Anglo-Dutch Basin using the climate stratigraphic approach
- Author
-
S.D. Nio, M. de Jong, A.R. Böhm, and D. Smith
- Subjects
Regional geology, Paleontology, Geophysics, Carboniferous, Cyclostratigraphy, Economic geology, Stratigraphy (archaeology), Petrology, Palaeogeography, Geology, Environmental geology, Geobiology
- Abstract
The Upper Carboniferous play in the Anglo-Dutch offshore continues to be a challenging exploration target, as well as providing a significant portion of the region’s existing gas production. Despite the application of various techniques, a secure genetic stratigraphic framework, applicable at field as well as regional level, has continued to be elusive. This paper presents such a framework, using only routine wireline log data supported by the limited amount of published stratigraphic information. In addition to their conventional properties, wireline logs conceal information encoded in their waveform properties. Treated as complex waveforms, logs are amenable to a range of ‘time-series’ analytical methods. Applying the concepts of global cyclostratigraphy (Perlmutter et al., 1990, 1998), so-called spectral trend (or INPEFA) curves, which show uphole changes in the waveform properties of the data, are used to generate a framework of near-synchronous well-to-well correlations. Using this approach, we have subdivided the top part of the Carboniferous succession – essentially the Westphalian – into nine first-order stratigraphic packages, W1000 to W9000 from bottom to top. Most of these packages can be further subdivided into second-order packages, and some of these into third-order packages, taking the resolution of this scheme down to a few tens of metres, or even to a few metres. These packages have been identified in a total of over 50 wells in the offshore UK and offshore Netherlands sectors. Comparison with the limited information publicly available on previous stratigraphic classifications indicates that our scheme is far more widely applicable, and probably considerably more reliable than any other previously attempted at the regional scale. Also, the scheme has the potential for further subdivision, to the limit of resolution of the log data, at the local (field/reservoir) scale. As our subdivisions are inherently time-related, they will now serve as the most appropriate framework within which to understand basin paleogeographic development, and the distribution of reservoir and seal facies within the Upper Carboniferous. The purpose of the paper is two-fold. First, we present our stratigraphic scheme and the method of climate stratigraphy upon which it is based. Second, we show how systematic application of this method in well-to-well correlations leads to the identification of important intra-formational unconformities.
- Published
- 2007
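A hedged sketch of the spectral-trend (INPEFA-type) curve idea referenced in the entry above: fit a short prediction filter to a wireline log and cumulatively integrate the prediction errors, so that trend breaks in the curve suggest stratigraphic surfaces. The filter length and the plain least-squares autoregressive fit (rather than a maximum-entropy method) are simplifying assumptions of mine.

```python
import numpy as np

def spectral_trend_curve(log, order=10):
    """Cumulative prediction-error curve of a wireline (e.g. gamma-ray) log."""
    x = (np.asarray(log, float) - np.mean(log)) / np.std(log)
    # Predict x[n] from its 'order' previous samples:
    A = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    coef, *_ = np.linalg.lstsq(A, x[order:], rcond=None)
    errors = x[order:] - A @ coef
    return np.cumsum(errors)        # inflection points hint at package boundaries

# curve = spectral_trend_curve(gamma_ray_samples)   # gamma_ray_samples: hypothetical
```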
28. Report on EAGE Vienna 2006 workshop on marine multi-azimuth seismic
- Author
-
P. Fontana and T. Summers
- Subjects
Geophysics, Telmatology, Geology, Metamorphic petrology, Seismology, Construction engineering
- Abstract
The EAGE conference in Vienna, June 2006, was the venue for a workshop on one of the most interesting developments in seismic technology in recent years. The subject was multi-azimuth and wide-azimuth seismic surveying in the marine environment, with emphasis on its application in deepwater environments. The timing of this workshop was particularly appropriate, with the introduction since 2004 of several new technologies and methodologies to image below complex overburden in major deepwater oil and gas provinces. The workshop brought together a range of presenters from E&P companies and seismic contractors to present the state of play in the technology and discuss challenges for the future. It was the first of several sessions on the subject in 2006: the EAGE workshop was followed by the SEG/EAGE Summer Research Workshop on subsalt imaging and a special session on multi-azimuth and wide-azimuth seismic at the SEG convention in October. Below is a summary of the EAGE multi-azimuth workshop, as an introduction to the papers in this special edition of First Break. The perspectives provided in the keynote papers of the two authors, as coordinators of the workshop, form the basis of this article.
- Published
- 2007
29. WSGF — Time for change part 2: Forces and energies acting on the seismic source and earth systems
- Author
-
Spencer L. Rowse
- Subjects
Earth system science, Geophysics, Seismic vibrator, Exploration geophysics, Dynamite, Acoustics, Energy transfer, Units of energy, Impulse (physics), Weight drop, Geology
- Abstract
In land geophysical exploration an impulsive or vibratory source, operating at the earth’s surface, is used to generate the seismic signal during field acquisition. Comparisons of the manufacturers’ output specifications will often be a factor in selecting a seismic source for ‘good S/N’ (signal-to-noise ratio) in a particular survey area. For impulsive sources, such as dynamite, weight drop, and accelerated weight drop (AWD), the source specifications are given in energy units (joules, foot-pounds), whereas vibroseis sources are rated in force units (newtons, pounds-force). For either type of source, the manufacturer’s specification is the output ‘power’ of the source mechanism and does not take into account energy losses in the source mechanism or in the interacting earth volume when the seismic signal is generated. In this paper, I examine some of the factors that determine the transfer of energy from the source mechanism to the seismic signal, and show that, for AWD sources, the energy transfer during the impulse can be greatly improved by optimization of the source parameters. For vibrators, the energy transfer to the ground may be a better indicator of the propagating wave than the current WSGF estimates.
- Published
- 2021
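A worked example of the units mismatch discussed in the entry above: a plain weight drop rated by its potential energy in joules, of which only an assumed fraction couples into the seismic wavefield. All numbers are illustrative, not the paper's.

```python
m, h, g = 500.0, 3.0, 9.81        # 500 kg mass dropped from 3 m (assumed)
e_rated = m * g * h               # rated output energy ~ 14.7 kJ
eta = 0.3                         # assumed source-to-ground coupling efficiency

print(f"rated {e_rated / 1e3:.1f} kJ; ~{eta * e_rated / 1e3:.1f} kJ radiated "
      f"as seismic signal at 30% coupling")
```

A vibrator's newton rating, by contrast, specifies peak force, so comparing the two spec sheets directly says little about radiated seismic energy, which is the paper's point.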
30. Mineral exploration using modern data mining techniques
- Author
-
Colin T. Barnett and Peter M. Williams
- Subjects
Mineral exploration, Geophysics, Quantitative analysis (finance), Artificial neural network, Process (engineering), Supervised learning, Unsupervised learning, Data mining, Set (psychology), Geology, Visualization
- Abstract
Colin T. Barnett and Peter M. Williams discuss how new data mining concepts such as visualization and probabilistic modelling can provide the key to improved exploration success in the mining industry. The article is a slightly abridged version of a contribution to the latest special publication from the Society of Economic Geologists, which this year celebrates its 100th anniversary. Following an analysis of the recent performance of the gold industry, Schodde (2004) concludes that gold exploration is currently only a break-even proposition. In the last 20 years, the average cost of a new discovery has increased nearly fourfold, and the average size of a deposit has shrunk by 30%. The average rate of return for the industry has been 5–7%, which is of the same order as the cost of capital. Why should this be, and what can be done about it? Paterson (2003) observes that, historically, discoveries have taken place in waves, after the introduction of new methods or advances in the understanding of ore genesis. For instance, discovery rates jumped sharply between 1950 and 1975, following the development of new methods and instruments in exploration geophysics and geochemistry. In the last quarter century, there has been a comparable surge in digital electronics and computing that has resulted in a great increase in the quality and quantity of exploration data. Yet these developments, on their own, evidently were not sufficient to reverse a downward trend in the discovery rate during this period. So where should we look for new methods to drive the next wave of discoveries? It seems we are now collecting data faster than we can absorb it. But this is also true in bioinformatics with genome sequencing, or in making sense of other huge corpora of data available on the Internet. It is the thesis of this paper that the new methods are to be found in techniques currently being developed for extracting meaningful information from data. Specifically, we should look to recent developments in visualization and data mining. Many data mining techniques are inspired by analogy with human intelligence and suggest a new idea of computing. Conventional computing is restricted to tasks for which a human can find an algorithm. Living creatures, however, are programmed by experience, not by spelling out every step of a process. Data mining is therefore about discovering how machines, like humans, might learn from data. Machine learning broadly distinguishes between supervised and unsupervised learning. Supervised learning, or learning from examples, requires sufficiently many labelled cases to be available. These form a set of known input-output pairs, usually called a training set, and the task is to learn the true input-output mapping from these examples. In the exploration case, the training set typically consists of a collection of known deposits and known barren regions. For unsupervised learning, we know only the inputs and not the corresponding outputs. The aim, then, is to search for ‘interesting’ features of the data, such as clusters or outliers, or for some latent structure which would account for how they were generated. In this paper only the case of supervised learning is considered, but see Williams (2002) for some further discussion of both approaches. The paper begins with a review of recent advances in visualization and supervised learning techniques, such as neural network models. The use of these ideas is then demonstrated in a study of gold exploration in the Walker Lane in the western United States. Finally, it is shown how the results can be applied to a quantitative analysis of exploration risk, and how improved targeting accuracy can reduce exploration costs and increase the probability of success.
- Published
- 2006
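A minimal sketch of the supervised-learning targeting workflow described in the entry above: train a small neural network on labelled deposit/barren cells and map the predicted deposit probability. The feature set, network size and scikit-learn MLP choice are my assumptions, not the authors' model.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X_train = rng.normal(size=(300, 5))   # stand-ins for gravity/magnetic/geochem features
y_train = rng.integers(0, 2, 300)     # 1 = known deposit cell, 0 = known barren cell

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)

grid_cells = rng.normal(size=(1000, 5))                 # unlabelled exploration cells
prospectivity = model.predict_proba(grid_cells)[:, 1]   # deposit probability map
```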
31. Integration of 4D seismic into the dynamic model: Girassol, deep offshore Angola
- Author
-
D. Dubucq, J.A. Jourdan, and F. Lefeuvre
- Subjects
Regional geology ,Geophysics ,Engineering geology ,Submarine pipeline ,Economic geology ,Geomorphology ,Igneous petrology ,Palaeogeography ,Seismology ,Geology ,Environmental geology ,Geobiology - Abstract
In the deep-water environment of West Africa, 3D seismic information is a key factor for the exploration, appraisal, development, and monitoring of hydrocarbon fields. To obtain a better understanding of the reservoir, modelling and simulation work was performed, demonstrating that high-resolution (HR) seismic data would provide a more accurate image of the reservoir. The results of this HR survey (Beydoun et al., 2002) had a significant impact on the definition of the Girassol reservoir model by permitting clearer identification of the stacked turbidite channels (Navarre et al., 2002). Girassol was discovered in 1996 off Angola, in water depths up to 1400 m. The field was initially close to the bubble-point pressure, with no gas cap. After three appraisal wells, the decision was made to launch a fast-track development. 4D HR seismic was planned as a reservoir-monitoring tool. The 4D HR data currently consist of a base 3D HR seismic survey shot in 1999 and a monitor 3D HR survey shot in the last two weeks of 2002. Many people think of 4D as an end-of-field-life tool, a way to identify possible by-passed oil. Nevertheless, it is now frequently acquired early in field life for monitoring purposes (Goto et al., 2004). In the Girassol case, it was decided to shoot the repeat 3D HR survey after only one year of production and about six months after the start of gas injection. The first reason was to monitor the effect of gas injection in an extremely heterogeneous turbidite environment. The second reason was that, in such a deep offshore environment, monitoring through re-entry for log measurement is prohibitively expensive. The first results confirmed the ability of 4D to contribute to field monitoring (Dubucq et al., 2003) only four weeks after the last shot had been completed (Lefeuvre et al., 2003). Therefore, based on the excellent quality of the 4D response and on further processing, it was decided to incorporate information from the 4D data into an updated reservoir model. This was done qualitatively in a first phase, which is described in this paper. However, the ultimate goal, to use seismic-derived saturation and pressure change distributions to constrain the reservoir model during the history-matching process (Gosselin et al., 2003), is not addressed in this paper.
- Published
- 2006
32. 3D geologic modelling of channellized reservoirs: applications in seismic attribute facies classification
- Author
-
Renjun Wen
- Subjects
Regional geology, Geophysics, Relation (database), Engineering geology, Facies, Seismic attribute, Geologic modelling, Petrology, Palaeogeography, Geology, Environmental geology
- Abstract
Renjun Wen, president and CEO of Geomodeling Technology, presents a new methodology for modelling stratigraphic heterogeneity in channellized reservoirs. Geological models are usually used qualitatively in seismic interpretation. This paper illustrates that quantitative representations of detailed geological models can significantly enhance seismic attribute interpretation through facies classification. When applying seismic attribute classification to reservoir facies mapping, one often faces typical questions such as:
- Which attributes should be used as input to classification?
- How many classes should be used in the unsupervised classification method?
- How many levels of hierarchy should be selected in the hierarchical classification method?
- Does the seismic facies correspond to the geological facies?
- How can attribute-derived facies models be validated?
There are no unique and easy answers to these questions. In this study, we aim to create a more accurate representation of the reservoir by using 3D synthetic Earth models to guide seismic attribute classification. We consider a channellized reservoir, for which seismic attribute analysis has proven very useful but for which results can be difficult to interpret. The next section describes a 3D stratigraphic modelling approach for the channellized reservoir. The major channel components and parameterizations are illustrated with examples. This is followed by a summary of the seismic attribute analysis and classification workflow applied to a synthetic seismic volume. Results of attribute classifications using a self-organizing map (SOM) (Kohonen, 1989) and waveform correlation maps are compared in relation to different input attributes and classification parameters. The lessons learned from this synthetic example are summarized and the selection of attributes for facies classification is discussed.
3D stratigraphic models of channellized reservoirs
There are several computer-based methods to build 3D reservoir models for flow simulation, such as object-based methods or cell-based geostatistical methods (Dubrule and Damsleth, 2001). However, none of these methods is able to reproduce stratigraphic heterogeneity patterns at sub-seismic scale, which can be major controlling factors for fluid flow and for spatial variations of acoustic properties. In this paper we report a new modelling method to generate 3D stratigraphic architectures of channellized reservoirs. The method is an extension of the bedding-structure modelling method developed by Wen et al. (1998) and is being further developed in the SBED Joint Industry Project led by Geomodeling Technology. The stratigraphic features within channellized reservoirs to be modelled in this study are below the resolution limit of conventional seismic data. The cell size is about 20 × 20 × 1 m. At such a modelling scale, detailed geological features must be modelled based on their formation process, so that their 3D structures can be correctly represented in the geological model.
- Published
- 2005
33. A complex 3D volume for sub-basalt imaging
- Author
- F. Martini, Richard T. Single, C.J. Bean, and Richard Hobbs
- Subjects
Regional geology ,Engineering geology ,Volcanism ,computer.software_genre ,Synthetic data ,Geophysics ,Data mining ,Petrology ,computer ,Palaeogeography ,Geology ,Seismic to simulation ,Environmental geology ,Data integration - Abstract
Thick successions of basalt and basaltic-andesite lava flows were extruded during continental break-up, and they cover pre-existing sedimentary basins often of interest for hydrocarbon exploration. With conventional seismic acquisition and processing methods, it is difficult to image both the internal architecture of the volcanic succession and the underlying sub-basalt structure. The use of synthetic data can help us to understand the poor sub-basalt imaging quality and to develop effective acquisition and processing approaches useful for real data. Moreover, non-seismic methods have been successful in improving understanding of the overall geometries of sub-basalt targets. Therefore, integration of seismic and non-seismic data seems to yield promising results and needs to be explored further. From all these considerations arises the need for a realistic 3D basalt model that allows the simulation of realistic seismic and non-seismic data, on one hand to test seismic acquisition and processing techniques, and on the other to develop strategies for integrating geophysical data into a common methodology to overcome the sub-basalt imaging problem. A complex 3D model was built by adapting all the information available from interpretation of seismic data, log data, gravity data, and geological observation. Seismic and non-seismic synthetic data have been produced from the model. In this paper we present the methodology used to develop the 3D model as well as initial results from the data simulations. The model and the data are available to the public through the authors of the present paper.
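As a toy illustration of how synthetic seismic data are generated from an Earth model (the paper itself uses far more complete 3D simulation), a 1D convolutional seismogram across a strong basalt impedance contrast might be sketched as follows; layer values and wavelet frequency are hypothetical.

```python
import numpy as np

dt = 0.002                                    # sample interval, s
imp = np.concatenate([np.full(200, 4.0e6),    # sediments (density x velocity)
                      np.full(100, 1.2e7),    # basalt: strong impedance jump
                      np.full(200, 5.0e6)])   # sub-basalt sediments
refl = np.diff(imp) / (imp[1:] + imp[:-1])    # normal-incidence reflectivity series

t = np.arange(-0.05, 0.05, dt)                # 25 Hz Ricker wavelet
f = 25.0
ricker = (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)
trace = np.convolve(refl, ricker, mode="same")  # top basalt dominates, masking deeper events
```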
- Published
- 2005
34. Seismic processing: past, present and future
- Author
- M. Roth
- Subjects
Data processing ,Workstation ,business.industry ,Information technology ,Data science ,Field (computer science) ,Visualization ,law.invention ,Geophysics ,Workflow ,law ,business ,Raw data ,Massively parallel ,Seismology ,Geology - Abstract
Murray Roth, executive vice president of marketing & systems, Landmark Graphics, explains how seismic data processing will have to adapt to the looming scenario of a seriously shrinking pool of expertise facing an explosion in the growth of data volumes. Since the dawn of digital seismic processing more than 40 years ago, the basic processing workflow has changed very little, despite huge strides in the evolution of computing hardware and algorithms. Seismic processors still follow time-honoured steps to take field data through static corrections, noise attenuation and parameter selection, ultimately ending up with a migrated, stacked image of the subsurface. Unfortunately, industry resources are shrinking while data volumes are exploding. Revolutionizing the way seismic processors do their daily work is not only desirable, but essential. This article reviews where we’ve come from, where we are today, and where we can reasonably expect to go in the near future based on emerging innovations in information technology. Origins of seismic processing The first real computer application in the oil patch was seismic data processing. During the 1950s, analogue seismic surveys were acquired, processed and interpreted in the field by a single individual known as a ‘human computer.’ He laid out the survey lines and supervised the shoot by day. At night, he ‘processed’ the analogue shot records, marked subsurface reflectors of interest, hand-timed and plotted each trace on paper, and drew a rough cross section looking for structural highs to recommend for drilling. By the late 1950s, analogue seismic records were converted into digital form using big number-crunching computers (Figure 1). To do this, of course, petroleum companies had to pull the seismic processing step out of the field and move it into a centralized computer facility. Now field crews recorded data on magnetic tape and sent the tapes to a processing centre, where raw data were turned into clean paper sections, which were forwarded to an office somewhere else, where seismic interpreters focused on mapping structures and identifying prospects. The original, unified prospect generation ‘value chain’ (from seismic acquisition through processing to interpretation), although enhanced by new technology, was nevertheless fragmented into increasingly isolated specialties. Only in recent years have they begun to reunite, through even more advanced information technologies. From the 1960s through the late 1980s, batch seismic processing sequences were executed overnight on mainframe computers. In the 1990s, certain parts of the processing workflow moved onto a range of powerful new computers, from interactive workstations to massively parallel supercomputers. During that decade, interpreters also adopted increasingly sophisticated computer systems for the analysis and visualization of processed seismic data.
- Published
- 2004
35. Soft computing for qualitative and quantitative seismic object and reservoir property prediction. Part 3: Evolutionary computing and other aspects of soft computing
- Author
- Fred Aminzadeh
- Subjects
Soft computing ,Paleontology ,Geophysics ,Property (philosophy) ,Artificial life ,Reservoir modeling ,Stratigraphy (archaeology) ,Hydrocarbon exploration ,Object (computer science) ,Data science ,Geology ,Evolutionary computation - Abstract
This is the third part of a series of review papers on soft computing applications in the petroleum industry. In this paper we focus on evolutionary computing and related topics such as GA (genetic algorithms), genetic engineering, the genome, DNA, artificial life and emergent intelligence. We begin with a brief overview of evolutionary computing technology. We also give an overview of GA in exploration and production (E&P). We then highlight some recent applications of genetic algorithms in various aspects of hydrocarbon E&P. These include applications in production optimization, reservoir characterization, and permeability prediction. We also propose a framework for the more effective use of GA (Genome) as well as likely applications of ‘complexity theory’ in seismic exploration. Introduction Evolutionary computing techniques cover a large spectrum of related technologies. Among them are: GA, genetic engineering, the genome, DNA and emergent intelligence. These technologies are already having a profound impact on many areas. Most notably, the human genome has already found practical applications in the life sciences (e.g. medicine and the pharmaceutical industry). Figure 1, from the US Department of Energy’s human genome initiative, shows the link between DNA and life. Some of these methods have been used on their own or in conjunction with other soft computing methods in many aspects of geoscience and hydrocarbon exploration and production problems. Most GAs have been used as a means of efficient optimization. They have also been used to discover and extract knowledge or rules, especially when a large body of information has to be searched. Limited applications have used genome- or DNA-type concepts to categorize rock formations, recognize seismic patterns or describe the sedimentation process. These are among the most promising applications of evolutionary computing for hydrocarbon exploration. The rest of this introductory section gives an overview of evolutionary computing. It also gives an overview of some of these methods in the petroleum industry. The rest of the paper highlights some recent applications in E&P, and provides a brief description of complexity theory, which is expected to have many applications in exploration.
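Purely for illustration (none of the E&P applications cited above are reproduced here), a bare-bones genetic algorithm of the kind used for optimization problems can be sketched with a hypothetical one-parameter misfit function.

```python
import numpy as np

rng = np.random.default_rng(1)

def misfit(x):                       # hypothetical objective, e.g. a data misfit
    return (x - 3.7) ** 2

pop = rng.uniform(0, 10, size=50)    # initial population of candidate solutions
for gen in range(100):
    fitness = -misfit(pop)           # higher fitness = lower misfit
    parents = pop[np.argsort(fitness)[-25:]]            # selection: keep the better half
    kids = (rng.choice(parents, 25) + rng.choice(parents, 25)) / 2  # crossover
    kids += rng.normal(0, 0.1, 25)                      # mutation
    pop = np.concatenate([parents, kids])

print("best estimate:", pop[np.argmin(misfit(pop))])    # converges toward 3.7
```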
- Published
- 2004
36. Restoring the seismic image with a geological rule base
- Author
- D. Hodge, S. Bland, and P. Griffiths
- Subjects
Regional geology ,media_common.quotation_subject ,Engineering geology ,Context (language use) ,computer.software_genre ,Paleontology ,Geophysics ,Salient ,Section (archaeology) ,Conceptual model ,Data mining ,Stratigraphy (archaeology) ,computer ,Geology ,Environmental geology ,media_common - Abstract
Stuart Bland, Paul Griffiths and Dan Hodge of Midland Valley Exploration, Glasgow, Scotland discuss a new conceptual model for understanding the development of structures. This paper presents a technique that optimises the use of seismic data through manipulation of the image using a geological rule base. The approach can be readily used in routine interpretation and saves time by quickly focusing effort on fruitful interpretational models and by increasing confidence in picking in poor data areas and in complex structure. Seismic imaging is a primary source of information used in the exploration for hydrocarbons. Analogies have been drawn between the use of seismic in exploration and production (E&P) and that of medical imaging in healthcare. There are similarities in the core functions of the seismic interpreter and the radiologist: both rely on high resolution images as 2D sections or 3D models to reveal what cannot be observed directly. In both disciplines the key aims are to note salient features in order to produce an accurate diagnosis of the situation and to advise others of the conclusions. Neither driller nor surgeon will appreciate surprises, and each will expect to have been apprised of critical factors and potential risks. The surgeon is interested in the location of the target and the most efficient route to it; likewise the drilling engineer needs to define the target in the hydrocarbon accumulation. However, in both scenarios, whatever leading-edge technology is applied, the outcome is dependent on the interpretation of the data, which, until drilling or surgery, remains an estimation of reality. At a fundamental level in hydrocarbon exploration, the interpretation of data is the product of a continual stream of decisions: ‘What does the horizon look like?’, ‘Can it be correlated across faults?’, ‘Where do the faults terminate?’, ‘Are the faults linked?’, ‘Is the horizon folded?’ One technique available to help the geophysicist is to flatten the seismic on key marker horizons. This is the digital version of the interpreter taking a folded paper section and overlaying one part on another to check character and correlation. Taking this technique a stage further, we can use it to mimic simple deformations where flat-layered rocks become folded or faulted. Since horizons are both spatial and temporal objects (they are defined by geometry and age), horizon flattening can reveal significant features present at a particular time. Unfortunately this process has a number of drawbacks that require the interpreter to overlook distortions in the image, artefacts of the flattening process. These artefacts can arise where the horizon is interpolated across a fault or, more generally, because the flattening does not replicate the deformation observed in the section. These artefacts can significantly mislead the interpreter if not recognised. Where the medical doctor can refer to records to gain an insight into the patient’s medical history, the geologist can restore the section to understand its evolution. By using structural restoration to sequentially remove the effects of sediment compaction, isostatic adjustment, faulting and fault-related folding that have altered the present-day section since deposition, we have a geologically valid way of looking at the history of the development of a structure while referencing the seismic image of the present day.
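A minimal sketch of the horizon-flattening step described above, assuming a picked horizon supplied as a two-way-time (sample) array; the function and array names are hypothetical.

```python
import numpy as np

def flatten(section, horizon_samp, datum_samp):
    """Shift each trace so a picked horizon lies at a constant datum.

    section:      2D array (n_traces, n_samples)
    horizon_samp: picked horizon, in samples, one value per trace
    datum_samp:   target flat level, in samples
    """
    flat = np.zeros_like(section)
    for i, h in enumerate(horizon_samp):
        shift = int(round(datum_samp - h))
        flat[i] = np.roll(section[i], shift)  # crude bulk shift; real tools interpolate and pad
    return flat
```

Structural restoration, as the abstract stresses, goes further: it honours the deformation mechanism rather than applying a bulk vertical shift, which is precisely what avoids the flattening artefacts described above.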
Structural validation aids the decision process between alternative interpretations by testing the results within the framework of our understanding of geological history and evolution. Inclusion of the seismic enables validation of the geohistory within the context of the data. Three case studies are presented to illustrate the techniques involved in restoring the seismic image and the benefits of adopting this approach. Each case study has a distinctive setting, characteristics, key issues and associated risks. The first example is set within an extensional fault system of the Gullfaks, northern North Sea, and depicts an untested interpretation. The second, an inverted series of half-grabens in the southern North Sea, typifies the problem of degrading seismic quality at depth. The final case study is taken from a foreland thrust basin in the Alberta Foothills, Canada. Each example demonstrates an enhanced level of detail and a reduced risk of error in the final interpretation of apparently simple structures.
- Published
- 2004
37. Gas reserves and reservoir trends in The Netherlands
- Author
- A.A. van de Weerd
- Subjects
Hydrology ,Consumption (economics) ,business.industry ,Distribution (economics) ,Agricultural economics ,Natural gas field ,Product (business) ,Geophysics ,Position (finance) ,Production (economics) ,Perpetuity ,Oil field ,business ,Geology - Abstract
The Netherlands is one of the largest gas producers in Europe, with about 71.2 Bcm (2.51 Tcf) of gas produced during the year 2002, half of which was exported, supplying Europe with about 20% of its total consumption. Significant gas production in The Netherlands started in 1965, reached a peak during 1976 with 101 Bcm (3 Tcf), and by year-end 2002 a total of 2622 Bcm (92.6 Tcf) had been extracted. Currently the government limits yearly gas production to 80 Bcm, although the rate in the past four years has been well under this limit. Early exploration was started before the Second World War by a precursor of NAM (the 50/50 Shell-Exxon joint venture active in The Netherlands); in 1943 the Schoonebeek oil field was found, followed by the Coevorden gas field in 1951. In 1959, the giant Groningen gas field was discovered by NAM, changing the face of exploration in The Netherlands. Since that time NAM has retained its dominant position as operator, controlling about three quarters of annual gas production and reserves. Public-domain access to data on gas and oil fields in The Netherlands is scarce, with pre-2003 legislation allowing operators to withhold all information on onshore wells. Since most onshore concessions are large, held over long periods of time (several in perpetuity), and have no relinquishment obligations, many fields have been found and are in production without any data entering the public arena. Offshore, pre-2003 legislation required operators to release well data after 10 years. Offshore concessions are relatively large and many contain several producing fields for which little or no data are in the public domain. Under the new legislation in force since 1 January 2003, detailed production data, and all on- and offshore well and seismic data older than five years, have to be released. However, reserve figures for fields are not in the public domain. Reserve data submitted by the operators remain confidential for a period of 10 years, and in the absence of any legal requirement to publish, operators use this advantage. On a yearly basis the government publishes general estimates of reserves, but not for individual fields. Data on the current reserve distribution for fields, reservoir trends and licence holders are not easily ascertained. This paper attempts to fill this information gap by reviewing reserves, whether these are in developed or undeveloped fields. Reserves are oil or gas volumes that can be commercially recovered under current economic conditions, industry practices and government regulations. Developed reserves are those being produced today, while undeveloped fields are not yet on stream. Remaining reserves are those left in a producing field. Initial (or ultimate) reserves of a field are the cumulative production plus the remaining reserves. Because the technical data of the fields are not available, the different reserve categories are used here informally and without strict definitions. In this paper a distinction has been made between the giant Groningen gas field and the other fields, collectively called the ‘Small fields’. This term is used in a comparative sense, with the Groningen field as the measuring stick, and is applied due to the Small Field Policy in The Netherlands, which gives operators priority to bring small gas fields immediately on stream in lieu of production from the Groningen field. Exploration for and production from small fields is therefore not constrained by demand for gas.
Moreover, because the Groningen field acts as a swing producer, small fields can produce with high load factors. The Small Field Policy sets production constraints on the Groningen field. In addition, the gas price is kept relatively high because it is linked to a basket of oil product prices. Without these production constraints and the linkage of the gas price to that of oil, the low production costs and high production capacity of the Groningen field would result in low gas prices for Western Europe, and most small fields in The Netherlands and surrounding countries would have been uneconomic during much of the lifetime of the Groningen field.
- Published
- 2004
38. Soft computing for qualitative and quantitative seismic object and reservoir property prediction. Part 1: Neural network applications
- Author
- Fred Aminzadeh and P. de Groot
- Subjects
Soft computing ,Focus (computing) ,Paleontology ,Geophysics ,Artificial neural network ,Property (programming) ,Reservoir modeling ,Object (computer science) ,Data science ,Fuzzy logic ,Geology ,Object detection - Abstract
Fred Aminzadeh and Paul de Groot of dGB Earth Sciences begin a major series of three articles on the increasing use of soft computing techniques for E&P geoscience applications, focusing first on how neural networks can enhance seismic object detection. Soft computing has been used in many areas of petroleum exploration and development. With the recent publication of three books on the subject, it appears that soft computing is gaining popularity among geoscientists. In this paper we focus on one aspect of soft computing: neural networks, in qualitative and quantitative seismic object detection. In subsequent papers we will review other aspects of soft computing in exploration. Highlighted here will be the role neural networks play in combining different seismic attributes and effectively bringing together data with the interpreter’s knowledge to decrease exploration risk in four categories (geometry, reservoir, charge and seal). Three new books in the general area of soft computing applications in exploration and development, Wong et al. (2002), Nikravesh et al. (2003) and Sandham et al. (2003), represent a comprehensive body of literature on recent applications of soft computing in exploration. Soft computing comprises neural networks, fuzzy logic, genetic computing, perception-based logic and recognition technology. Soft computing offers an excellent opportunity to address the following issues: ■ Integrating information from various sources with varying degrees of uncertainty ■ Establishing relationships between measurements and reservoir properties ■ Assigning risk factors or error bars to predictions. Deterministic model building and interpretation are increasingly being replaced by stochastic and soft computing-based methods. The diversity of soft computing applications to oil field problems and the prevalence of their acceptance can be judged by the increasing interest among earth scientists and engineers. Given the broad scope of the topic, we will limit the discussion in this paper to neural network applications. In subsequent papers we will review other aspects of soft computing, such as fuzzy logic, in exploration. Neural networks have been used extensively in the oil industry. Approximately 10 years after McCormack’s review (1991) of neural network applications in geophysics, much work has been done to bring such applications into the mainstream of geophysical interpretation. Some of these efforts are documented in Wong et al. (2002), Nikravesh et al. (2003) and Sandham et al. (2003), which include many papers and extensive references on neural network applications. Most of these applications have been in reservoir characterization, seismic object detection, creating pseudo logs, and log editing. In the next section, we focus on two general areas of application of neural networks. The first comprises qualitative methods, whose main aim is to examine seismic attributes to highlight certain seismic anomalies without having access to very much well information; in this case neural networks are primarily used for classification purposes. The second category involves quantitative methods, where specific reservoir properties are quantified using both seismic data and well data, and neural networks serve as an integrator of the information.
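For illustration only (the papers cited above use a variety of network designs), a supervised network that integrates several seismic attributes into a single object/no-object prediction can be sketched with scikit-learn; the attribute matrix and labels are hypothetical stand-ins for interpreter-supplied training data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                  # hypothetical: 5 seismic attributes per sample
y = (X[:, 0] + 0.5 * X[:, 3] > 1).astype(int)   # stand-in for interpreter object labels

# one hidden layer integrates the attributes into a detection probability
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
net.fit(X[:1500], y[:1500])
print("hold-out accuracy:", net.score(X[1500:], y[1500:]))
```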
- Published
- 2004
39. Integration loop of ‘global offset’ seismic, continuous profiling magnetotelluric and gravity data
- Author
- P. Dell'Aversana
- Subjects
Regional geology ,Offset (computer science) ,Synthetic seismogram ,Geophysical imaging ,Engineering geology ,computer.software_genre ,Geophysics ,media_common.cataloged_instance ,Data mining ,European union ,computer ,Seismology ,Geology ,Data migration ,media_common ,Environmental geology - Abstract
In this paper an approach aimed at integrating many different geophysical/geological data sets is discussed. It is based on a recursive process of forward and inverse modelling of seismic and non-seismic data. The experimental data set of the Enhanced Seismic In Thrust belts (ESIT) research project, funded by Eni E&P, Enterprise Oil Italiana and the European Union, was used in order to apply the approach to a real case of complex geological setting. Near-vertical reflection seismic, long-offset seismic, high-resolution magnetotelluric, gravity, borehole and surface geological data were involved in the process. We demonstrate how a ‘self-feeding’ integration loop is an efficient way to produce a unique geophysical model that responds to several basic requirements, such as optimized inversion/modelling in each parameter space, best fit in each geophysical domain, best seismic imaging, reliable geological meaning and cost/benefit ratio optimization. Introduction Integration of multiple data sets in complex geological settings represents one of the most challenging objectives in geophysics. This is true especially in the case of geophysical projects based on the acquisition of highly redundant data sets and characterized by many different sources of information. In fact, particularly in complex areas where the quality of standard seismic is poor, alternative non-seismic approaches are required. An exploration strategy based on many different and complementary methodologies always produces a complex data set, and integrating all the information into consistent and reliable models can fail if an appropriate integration strategy is not applied. In previous work (Dell'Aversana & Morandi 2000, 2002; Dell'Aversana 2001), we introduced an integration approach based on recursive forward and inverse modelling of seismic, magnetotelluric and gravity data. We showed that the so-called ‘global offset’ seismic approach (which also includes high-fold, long-offset data) can significantly improve the process of building reliable models by quantitative integration with MT and gravity data and with the support of borehole information. In a subsequent paper (Dell'Aversana et al. 2002b), we demonstrated how, by applying prestack depth migration to global offset data, it is possible to improve depth imaging in difficult geological settings, even when the S/N ratio of the conventional near-vertical reflection data is very low. This result can be obtained especially if non-seismic data are used for defining accurate multi-parametric models. These models can contribute to the definition of an appropriate velocity field for seismic data migration, as will be clarified in this paper. In fact, recent experiments and applications showed how the continuous profiling magnetotelluric method can produce reliable resistivity sections that can support both the velocity field definition and the geological interpretation in case of low-quality seismic sections (Zerilli & Dell'Aversana 2002). Here, we continue the discussion about the integration of many data sets, but also introduce several important additional concepts. We take the opportunity offered by the large multiple data set collected during the ESIT research project (Buia et al. 2002; Dell'Aversana et al. 2002b). We demonstrate that an appropriate quantitative integration of global offset seismic, continuous profiling high-resolution magnetotelluric (HRMT), gravity, borehole and geological data is a reliable and cost-effective process.
Each source of information contributes to different aspects of the process, due to the varying benefits and limitations of the different methodologies used. The final result, based on many different geophysical parameters, is a well-calibrated model that is consistent with the best seismic imaging. Geological consistency is considered a fundamental requirement at each step of the process.
- Published
- 2003
40. Environmental applications of airborne radiometric surveys
- Author
- M. Lahti and S. Kapotas
- Subjects
Hydrology ,business.industry ,Coal mining ,chemistry.chemical_element ,Radon ,Uranium ,Geologic map ,Nuclear reprocessing ,Geophysics ,chemistry ,Mining engineering ,Environmental monitoring ,Geological survey ,business ,Nuclear weapons testing ,Geology - Abstract
Airborne gamma-ray measurements, primarily developed for uranium exploration (e.g. Bristow 1983), have many applications in environmental monitoring and geological mapping (e.g. IAEA 1991; Jaques et al. 1997; Wilford et al. 1997). For example, they have been used to map radioactive fallout from nuclear accidents (e.g. Mellander 1989) and contaminant plumes from power plants (e.g. Rangelov et al. 1993), as well as to monitor the impact of uranium mining. Airborne measurements can be used for quick and effective mapping of large areas. Although the gamma-ray measurements record variations in the radioactivity of a relatively shallow surface layer (c. 0.3 m), the results are useful for both regional and targeted surveys. The results are normally presented as total gamma activity and as equivalent ground concentrations of uranium, thorium, potassium and other radionuclides (e.g. 137Cs) or as ratios (including ternary K, U, Th plots). Methods have been developed to separate natural and man-made radioactivity. This paper presents some examples of environmental applications of airborne gamma-ray surveys from Germany and the UK. In both cases, the results of multisensor (radiometric, electromagnetic and magnetic) airborne surveys are validated by ground measurements and sampling. The AERA project (Assessment of environmental risks by airborne geophysical techniques validated by geophysical field measurements) was an EC-funded project in south-east Germany, coordinated by the Geological Survey of Finland (GTK) (Gaal et al. 2001). A 1100 km2 site in the Zwickau area of Saxony was selected because of its extensive mining and industrial activities over several centuries. Uranium, black coal and nickel mining and smelting, military activities, modern heavy industry, and industrial and domestic waste have caused considerable environmental impacts. The HiRES-1 survey of central Britain was carried out to assess a range of environmental and resource applications of airborne survey data. The 14000 km2 area surveyed encompasses a wide range of rock and soil types. The area includes many urban centres and has a long history of extractive and manufacturing industry. There are regions with relatively high indoor radon levels and areas contaminated by fallout from nuclear weapons testing, the Chernobyl accident and discharges from the Sellafield nuclear fuel reprocessing plant. Small targets within the HiRES-1 area were flown as part of a collaborative GTK-BGS (British Geological Survey) project to investigate specific sites in more detail. These included landfills, colliery spoil heaps and gravel workings used for power station fly ash disposal. Although a multisensor approach was adopted for these projects, this paper concentrates on the environmental applications of airborne gamma-ray data.
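As a small, purely illustrative sketch of the ratio and ternary map products mentioned above (the grids and the per-channel normalization are hypothetical stand-ins for calibrated survey data):

```python
import numpy as np

rng = np.random.default_rng(2)
# stand-in grids for calibrated ground concentrations: K (%), eU (ppm), eTh (ppm)
K, eU, eTh = (rng.uniform(0.5, 3.0, (100, 100)) for _ in range(3))

u_th = eU / eTh        # U/Th ratio map: highlights relative uranium enrichment
total = K + eU + eTh   # crude total-activity proxy
# ternary K-Th-U image: each channel normalized and mapped to an RGB band (K=R, Th=G, U=B)
ternary_rgb = np.stack([c / c.max() for c in (K, eTh, eU)], axis=-1)
```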
- Published
- 2003
41. Applications of ground penetrating radar in the Three Gorges Project, China
- Author
- Lian Jijian and Li Zhangming
- Subjects
geography ,geography.geographical_feature_category ,Process (engineering) ,media_common.quotation_subject ,Excavation ,Geophysics ,Work (electrical) ,Mining engineering ,Hydroelectricity ,Ground-penetrating radar ,Quality (business) ,Levee ,Scale (map) ,Geology ,media_common - Abstract
During the Three Gorges Project (TGP) on the Yangtze River in China, a number of complicated engineering and geological problems had to be solved. A quick, high-resolution, non-destructive method was needed to find buried geological defects and engineering quality problems in order to guarantee the project schedule and engineering quality. Ground penetrating radar (GPR) was one technique which was effectively applied to detect the extent of inhomogeneous weathering of granite prior to excavation, to map the extent and attitude of large faults and weathering of alternating layers, and to check the engineering quality of concrete work for the TGP during construction. In this paper, three case histories are discussed. The detection results proved consistent with the results of excavation, making the value of GPR obvious. A large-scale, high-technology and difficult construction, the Three Gorges Project is the largest and most ambitious infrastructure project in the world. Geological exploration and surveys have been conducted for more than 30 years, providing a wealth of data prior to construction. Like other large hydroelectric engineering projects, the TGP was faced with many complicated engineering, geological and technological problems during construction, such as outlining inhomogeneous weathering layers in granite during excavation, determining the occurrence of large faults and weathering of alternating layers, detecting the position of seepage in the embankment, checking the engineering quality of the work, and so on. If these problems could not be rapidly solved during construction, they would delay the construction process and affect construction quality. As a high-resolution, non-destructive detection technique, GPR was able to rapidly and economically solve some of the problems encountered during TGP construction which other geophysical methods could not. Through testing, research and practical application, successful results were obtained using GPR to solve the engineering problems mentioned. In this paper, three case histories are presented. The first shows the delineation of inhomogeneous weathering layers in granite, the second illustrates the definition of faults and weathering of alternating layers, and the third demonstrates the checking of the engineering quality of work for the TGP. The detection results were confirmed by on-site inspection.
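A minimal sketch of the time-to-depth conversion that underlies such GPR interpretations; the permittivity and travel-time values below are hypothetical, not figures from the case histories.

```python
C = 0.3          # speed of light in vacuum, m/ns

def gpr_depth(twt_ns, eps_r):
    """Depth (m) of a reflector from two-way travel time (ns) and relative permittivity."""
    v = C / eps_r ** 0.5      # radar velocity in the medium, m/ns
    return v * twt_ns / 2.0   # one-way depth from two-way time

print(gpr_depth(150.0, 6.0))  # ~9 m, assuming eps_r ~ 6 for granite
```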
- Published
- 2003
42. The implications of anisotropy for seismic impedance inversion
- Author
- P. Lamy, Dominique Marion, Peter Swaby, Richard Eden, Paul Williamson, and P. S. Rowbotham
- Subjects
Regional geology ,Geophysics ,Engineering geology ,Well logging ,Seismic inversion ,Inversion (meteorology) ,Economic geology ,Anisotropy ,Seismology ,Seismic to simulation ,Geology - Abstract
Seismic inversion is an established technique for deriving acoustic impedance (AI) from seismic data using well log information as a low frequency constraint. However, within anisotropic strata, e.g. shale, velocity logs measured in deviated wells will typically exhibit higher velocities than would have been recorded in vertical boreholes (Furre and Brevik 1998; Hornby et al. 1999). If this effect is not corrected before building a low frequency impedance model for seismic inversion, impedance results will be biased around deviated well trajectories. Since this is quite a subtle effect, the unwary interpreter might assign geological meaning to inversion artefacts. In this paper we demonstrate a simple but effective method for correcting deviated well AI logs for anisotropic effects. We know that others have performed similar but more sophisticated corrections (e.g. Vernik 2001; Vernik and Fisher 2001), but we are not aware of this pitfall with deviated wells being widely acknowledged. Further, we consider the implications of the revealed anisotropy for single and simultaneous angle-stack (elastic impedance (EI)) inversions and propose workflows for compensating for anisotropy. Seismic impedance inversion has become an integral part of the reservoir characterisation workflow, since interpretation and quantification of reservoir rock properties are made using AI layer data rather than seismic amplitudes, which relate to AI contrasts (van Riel, 2000). Recently, seismic AVO data have also been inverted, bringing additional constraints on reservoir properties, since knowledge of the elastic rock properties can improve lithological and/or fluid reservoir characterisation over the use of AI alone. Two main families of inversion methods have developed, which we classify as deterministic and stochastic. In general, deterministic methods search for a single global optimum, with an objective function being the mismatch between seismic and synthetic data. By their nature, the resulting AI volumes have a high frequency spectrum determined and limited by the seismic data, and a low frequency spectrum derived from the well data. By contrast, the high frequency limit of AIs generated by stochastic methods is raised beyond the seismic spectrum using the high frequency content of well logs coupled with geostatistics. These AI results are therefore non-unique, making them suitable for statistical uncertainty analysis on many high-resolution models. Both deterministically- and stochastically-generated AIs are subsequently converted to models of reservoir properties (porosity, Vshale, etc.) via upscaled petro-elastic relationships. Final results include 3D models of reservoir properties and associated uncertainties that reflect uncertainties in seismic inversion results and petro-elastic relationships. To date, there has been relatively little consideration of the impact of anisotropy on impedance inversion, despite the increasingly widespread acceptance that sedimentary rocks, and in particular shales, are often quite strongly anisotropic (e.g., Thomsen, 1986). This is presumably because until now most inversion work has been focused on zero/near-offset cubes with (near-) vertical wells in relatively calm structural environments, where the impact of anisotropy can largely be factored out in the wavelet calibration step.
However, as the community begins to consider AVO and the inversion of far-offset substacks (e.g., Vernik 2001), it will become increasingly important to estimate and account for anisotropy at various stages of the processing. In this paper we first consider the discrepancies between real AI logs from adjacent vertical and deviated wells. We then describe a correction to the deviated logs assuming an anisotropic model dependent on the proportion of shale, and show the effect of this correction on the inversion results. Finally, we consider the implications of anisotropy for EI and simultaneous AVO inversion. Even though we have used the geostatistical inversion method (Haas and Dubrule 1994; Dubrule et al. 1998), all impedance inversion techniques benefit from accounting for anisotropy revealed by deviated well logs.
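A simple version of such a deviated-well correction can be sketched from the weak-anisotropy approximation of Thomsen (1986), which the paper cites, scaling the P-velocity measured along an inclined trajectory back to its vertical equivalent. The parameter values below are hypothetical and, as in the paper, would in practice be tied to shale volume.

```python
import numpy as np

def vp_vertical(vp_measured, incidence_deg, epsilon, delta):
    """Remove the angle dependence of P-velocity in a weakly anisotropic (VTI) medium.

    Thomsen (1986): vp(theta) ~ vp0 * (1 + delta*sin^2(t)*cos^2(t) + epsilon*sin^4(t))
    """
    t = np.radians(incidence_deg)
    factor = 1 + delta * np.sin(t) ** 2 * np.cos(t) ** 2 + epsilon * np.sin(t) ** 4
    return vp_measured / factor

# hypothetical: sonic value logged along a 45-degree trajectory through shale
print(vp_vertical(3200.0, 45.0, epsilon=0.2, delta=0.1))   # ~2977 m/s vertical equivalent
```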
- Published
- 2003
43. Reconstructing salt geometry using 3D CSEM data
- Author
- Martin Panzner, Humberto Salazar Soto, and Luis Alberto Sanchez Perez
- Subjects
Geophysics ,High resistivity ,Electrical resistivity and conductivity ,Inversion (meteorology) ,Geometry ,Seismic interpretation ,010502 geochemistry & geophysics ,01 natural sciences ,Model building ,Synthetic data ,Geology ,0105 earth and related environmental sciences ,Controlled source - Abstract
In this paper we demonstrate the imaging capabilities of a newly developed 3D Gauss-Newton inversion algorithm for marine controlled source electromagnetic (CSEM) data by inverting synthetic data generated from a known salt resistivity model. We show that the high resistivity contrast between salt and background sediments can be utilized to reconstruct reliable images of the salt structure without the use of any a priori information which could bias the outcome. Further, we re-invert a CSEM data set acquired in 2012 in the Salina basin in the Gulf of Mexico, using the same 3D Gauss-Newton inversion algorithm. The resulting resistivity model is compared to the initial salt interpretation based on seismic data. The top salt boundary in the inverted resistivity model correlates well with the initial interpretation. However, the base salt geometry, which is often difficult to map with seismic data alone, is imaged very differently. The CSEM inversion result is robust and independent of other geophysical data, and is therefore very valuable in a salt imaging workflow to support seismic interpretation and velocity model building.
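The paper's proprietary algorithm is not described here, but the generic damped Gauss-Newton model update such schemes are built around can be sketched as follows; the toy linear "physics" and all names are hypothetical, chosen only so the sketch runs.

```python
import numpy as np

def gauss_newton_step(m, d_obs, forward, jacobian, damping=1e-2):
    """One damped Gauss-Newton update: m += (J^T J + lambda*I)^-1 J^T (d_obs - f(m))."""
    r = d_obs - forward(m)                   # data residual
    J = jacobian(m)                          # sensitivity of data to model parameters
    H = J.T @ J + damping * np.eye(len(m))   # damped approximate Hessian
    return m + np.linalg.solve(H, J.T @ r)

# toy linear forward operator so the sketch is self-contained: d = G m
G = np.array([[1.0, 0.3], [0.2, 1.5], [0.7, 0.7]])
m_true = np.array([2.0, 0.5])
m = np.zeros(2)
for _ in range(5):
    m = gauss_newton_step(m, G @ m_true, lambda m: G @ m, lambda m: G)
print(m)   # converges toward m_true
```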
- Published
- 2020
44. GeoDRIVE - a high performance computing flexible platform for seismic applications
- Author
- Ghada Sindi, V. Etienne, Hussain Salim, Suha N. Kayum, Thierry Tonellot, Ali Momin, and Maxim Dmitriev
- Subjects
Software framework ,Flexibility (engineering) ,Modularity (networks) ,Geophysics ,Distributed computing ,010502 geochemistry & geophysics ,computer.software_genre ,Performance computing ,01 natural sciences ,computer ,Exascale computing ,Geology ,0105 earth and related environmental sciences - Abstract
GeoDRIVE, a high performance computing (HPC) software framework tailored to massive seismic applications and supercomputers, is presented. The paper discusses the flexibility and modularity of the application along with optimized HPC features. GeoDRIVE’s versatile design, combined with exascale computing capabilities, unlocks new classes of applications that significantly improve geoscientists’ ability to understand, locate and characterize challenging targets in complex settings. As a result, uncertainties in subsurface models are reduced both quantitatively and qualitatively, along with reduced drilling risks and improved prospect generation.
- Published
- 2020
45. An automated quantitative multi-stage approach to invert velocity models for microseismic event locations
- Author
- Vlad Shumila, Steve Falls, Dan Hook, Mike Preiksaitis, Fernando Castellanos, Ryan Nader, and Doug Angus
- Subjects
Data processing ,Microseism ,Real-time computing ,Particle swarm optimization ,Inversion (meteorology) ,Service provider ,010502 geochemistry & geophysics ,01 natural sciences ,Geophysics ,Hydraulic fracturing ,Workflow ,Data quality ,Geology ,0105 earth and related environmental sciences - Abstract
Complexity in hydraulic fracturing programmes has motivated microseismic service providers to innovate and propose creative methods to monitor drilling, completion and field development. Although microseismic analysis and interpretation have moved beyond the ‘dots-in-a-box’ solution, velocity model (VM) calibration using inversion plays a critical role in the initial phase of accurate microseismic event (event) location, ensuring the accuracy of subsequent higher-order microseismic attributes given the data quality and monitoring geometry. Knowing that business decisions are, at times, required in real time, it is imperative to provide confident event locations efficiently through the construction of well-constrained VMs based on quantitative and objective methodologies. The most time-consuming aspects of microseismic data processing are optimal VM construction and inversion. In this paper, we demonstrate improved efficiency in microseismic data processing by developing and implementing an automated approach to perforation shot (perf) detection and VM inversion using Particle Swarm Optimization (PSO). These primary tasks (perf detection and VM inversion) are critical in the event location workflow and can benefit significantly from increased efficiency. Although more advanced Green’s functions can provide more accurate solutions to the source location problem (e.g., Angus et al., 2014), we focus on ray-based approaches due to their high computational efficiency, especially for anisotropic media and hydraulic fracture monitoring, where large volumes of microseismic data (commonly in excess of 100,000 events) must be processed.
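The standard particle swarm update underlying such a calibration (not the authors' implementation; the objective, layer velocities, and constants are hypothetical) looks like this:

```python
import numpy as np

rng = np.random.default_rng(3)

def misfit(v):   # hypothetical perf-traveltime misfit for a 3-layer velocity model
    return np.sum((v - np.array([2500.0, 3100.0, 3900.0])) ** 2, axis=-1)

n, dim = 30, 3
x = rng.uniform(2000, 5000, (n, dim))        # particle positions = candidate layer velocities
vel = np.zeros_like(x)
pbest = x.copy()                             # each particle's best position so far
gbest = x[misfit(x).argmin()].copy()         # swarm-wide best position

for _ in range(200):
    r1, r2 = rng.random((2, n, dim))
    # inertia + pull toward personal best + pull toward global best
    vel = 0.7 * vel + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x += vel
    better = misfit(x) < misfit(pbest)
    pbest[better] = x[better]
    gbest = pbest[misfit(pbest).argmin()].copy()

print(gbest)   # calibrated layer velocities
```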
- Published
- 2019
46. Low-powered autonomous underwater vehicles for large-scale ocean-bottom acquisition
- Author
- Harry Debens, Fabio Mancini, and Ben Hollings
- Subjects
Regional geology ,Engineering geology ,Inversion (meteorology) ,Propulsion ,010502 geochemistry & geophysics ,01 natural sciences ,Geobiology ,Geophysics ,Software deployment ,Underwater ,Geology ,0105 earth and related environmental sciences ,Environmental geology ,Marine engineering - Abstract
In this paper we propose the use of autonomous underwater vehicles to enable faster and cheaper ocean bottom seismic acquisition. Our effort is focused on buoyancy-driven vehicles as this method of propulsion is extremely low-powered, enabling the units to have very long endurance and, therefore, making them suited to seismic acquisition on a large scale. We show that, for development-style surveys, these units can potentially double the acquisition efficiency. Furthermore, their ease of deployment makes them suited to acquisitions purposely designed for velocity estimation via full-waveform inversion, either on their own or in hybrid configuration with a streamer vessel.
- Published
- 2019
47. Random and systematic navigation errors: how do they affect seismic data quality?
- Author
- Dennis Fryar, David J. Monk, and Josef Paffenholz
- Subjects
Geophysics ,Offset (computer science) ,Synthetic seismogram ,Image quality ,Stack trace ,Navigation system ,Ray tracing (graphics) ,Geodesy ,Rotation (mathematics) ,Geology ,Standard deviation ,Remote sensing ,Hyperbola - Abstract
A quantitative analysis has been performed to assess the effect of navigation errors on marine seismic data. Two different types of errors were considered. The first is a systematic error, that of a rotation of the streamer coordinates, which could be caused, for example, by incorrect magnetic declination. The second type models the random errors in the receiver positions due to the limited accuracy of the navigation network. The analysis is performed as a function of streamer feather angle, structural dip, and acquisition parameters. The effects on the seismic data are reported in terms of stack degradation and difference in the apparent NMO velocity. To assess the effect of a rotational error on the final image, we process a synthetic seismogram consisting of a dipping event and a diffractor through DMO and migration. Significant stack degradation as a consequence of a systematic rotational error is found only for lines shot in alternating directions in the presence of notable crossline dip. In all other cases the error is either absorbed by the NMO velocity or inconsequential because of small crossline dip. Stack degradation caused by random position errors is weakly dependent on the crossline dip and can be minimized by collecting the lines with the boat driving in the down-dip direction. The changes in the resolution and position of the final image depend on the velocity which is used. For an inline of a survey shot along strike, the use of the true material velocity minimizes the impact of the rotational error, while use of the apparent NMO velocity leads to significant stack degradation. The misposition is about half a trace laterally and 5 and 10 ms in time for the true velocity and NMO velocity cases, respectively. Other cases will be discussed. INTRODUCTION While the effects of streamer feather on binning and stacking of seismic data have been studied in the past (e.g. Levin 1983, 1984), a quantitative analysis of the effects of an error in the measurement of the cable feather has not been made. Such a systematic error could be caused, for example, by incorrect magnetic declination and would result in an apparent rotation of the cable. More widespread are random errors in the receiver positions caused by the limited accuracy of the navigation measurements. Houston (1991) estimated the maximum uncertainty in cable receiver positions for a state-of-the-art navigation system to be about 6 m. The high cost and reliability problems introduced by redundant navigation networks call for an investigation of the trade-offs between operational cost and the quality of the final seismic section. Cost pressures have led to a heightened interest in establishing a link between efforts to reduce the uncertainty in the receiver positions and increased image quality. Ursin-Helm et al. (1992) studied the effect of infill on the quality of the final image and concluded that in their particular case an infill of 30% causes only minor improvements in section quality. They also studied the effects of less accurate navigation by visual inspection of the final seismic data. In this paper we offer a quantitative analysis of how seismic data are affected by statistical or systematic rotational errors in the position of a marine seismic streamer. PROCEDURE Ray tracing is used to assess the effects of random errors and of a particular systematic error, that of an apparent rotation of the cable.
Synthetic CMP gathers are collected over a model consisting of one dipping event for different structural dips, bin sizes, and feather angles of a straight single cable. The least squares NMO velocity and the stack response are calculated for data collected with a particular rotation error and compared to the error-free case. Two different shooting patterns are investigated. In the first case, all lines are shot in the same direction, while the second case consists of lines shot in alternating directions. If the stack trace results from the summation of traces which were collected from different boat passes (with feather), then energy from an event in the subsurface will deviate from a perfect hyperbola (Levin 1984). The need for correction to a hyperbola has been examined in relation to velocity modelling and DMO by Meinardus and McMahon (1981). In this paper, since we assume that the navigation errors are not known (and therefore not corrected), we do not apply the location correction required to correct for this effect, but rather analyze the results in terms of the change to the velocity that is determined. A rotational error of up to +/- 2 deg is considered. Random errors in the receiver positions are implemented as normal distributions around the true positions, which are assumed to fall on a straight line. The standard deviations increase linearly with offset, up to 25 m at the far offset (3000 m). A second set of experiments is used to assess the impact of a rotational navigation error on the resolution and position accuracy of the final migrated image. The synthetic constant-velocity 3D model consists of a dipping reflector and a diffractor suspended 200 m above the plane. Inlines and crosslines are processed through 2D DMO and 2D migration. Final sections of error-free data and data with a rotation error are compared.
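A sketch of the random-error model described above (normal perturbations growing linearly with offset, up to 25 m at 3000 m) and its first-order effect on hyperbolic NMO traveltimes; the error-model parameters follow the text, everything else is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
offsets = np.linspace(100, 3000, 120)          # nominal receiver offsets, m
sigma = 25.0 * offsets / 3000.0                # std dev grows linearly with offset (max 25 m)
x_err = offsets + rng.normal(0, sigma)         # perturbed inline receiver positions

t0, v = 2.0, 2500.0                            # zero-offset time (s), NMO velocity (m/s)
t_true = np.sqrt(t0**2 + (offsets / v) ** 2)   # hyperbolic moveout, true geometry
t_err = np.sqrt(t0**2 + (x_err / v) ** 2)      # moveout with mispositioned receivers
print("max traveltime shift: %.2f ms" % (1e3 * np.abs(t_err - t_true).max()))
```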
- Published
- 1994
48. Net reservoir discrimination through multi-attribute analysis at single sample scale
- Author
- Jonathan Leal, Reinaldo Viloria, Rafael Jerónimo, Fabian Rada, and Rocky Roden
- Subjects
Artificial neural network ,business.industry ,Well logging ,Petrophysics ,Pattern recognition ,Extension (predicate logic) ,Seismic analysis ,Set (abstract data type) ,Geophysics ,Facies ,Artificial intelligence ,Scale (map) ,business ,Geology - Abstract
Self-Organizing Map (SOM) is an unsupervised neural network (a form of machine learning) that has been used in multi-attribute seismic analysis to extract more information from the seismic response than would be practical using only single attributes. The most common use is in automated facies mapping. It is expected that every neuron or group of neurons can be associated with a single depositional environment, the reservoir’s lateral and vertical extension, porosity changes or fluid content (Marroquin et al., 2009). Of course, the SOM results must be calibrated with available well logs. In this paper, the authors generated petrophysical labels to apply statistical validation techniques between well logs and SOM results. Based on the application of PCA to a larger set of attributes, a smaller, distilled set of attributes was classified using the SOM process to identify lithological changes in the reservoir (Roden et al., 2015).
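A minimal sketch of the attribute-distillation step (PCA ahead of SOM classification) using scikit-learn; the attribute matrix is hypothetical and the SOM stage itself is omitted here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
attrs = rng.normal(size=(10000, 12))     # hypothetical: 12 attributes at single-sample scale

z = StandardScaler().fit_transform(attrs)   # put attributes on a common scale first
pca = PCA(n_components=0.9).fit(z)          # keep components explaining 90% of the variance
distilled = pca.transform(z)                # smaller attribute set passed on to the SOM
print(distilled.shape[1], "components retained")
```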
- Published
- 2019
49. Transfer learning and Auto-ML: A geoscience perspective
- Author
- Joshua Uwaifo and Ehsan Zabihi Naeini
- Subjects
Interpretation (logic) ,business.industry ,Earth science ,Deep learning ,Volume (computing) ,Area of interest ,Task (project management) ,Set (abstract data type) ,Geophysics ,Perspective (geometry) ,Artificial intelligence ,business ,Transfer of learning ,Geology - Abstract
Deep learning continues to receive increasing attention from researchers and has been successfully applied to many domains. This paper further extends the work of Zabihi Naeini and Prindle (2018) by adopting and examining two classes of machine learning techniques and their applications in geoscience from a pragmatic viewpoint: Transfer Learning and Automated Machine Learning, or Auto-ML (Feurer and Klein, 2015). Although machine learning (ML) is known to be most efficient and accurate when trained on a large volume of data, there are cases in practice where ML methods must be implemented with limited available data. In such cases ML algorithms are less efficient at generalizing to new data, and this is where Transfer Learning can add value. This is shown in an automatic petrophysical interpretation task where Transfer Learning is compared with training from scratch given a new geological area of interest, i.e., a set of wells in a different area. We show the efficiency of Transfer Learning in obtaining a model that generalizes successfully for the new wells investigated.
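As a generic illustration of the transfer-learning recipe (not the authors' network): freeze the layers learned on the data-rich area and retrain only the head on the few new wells. All shapes, names, and the random stand-in data are hypothetical.

```python
import torch
import torch.nn as nn

# feature extractor pretrained on many wells from area A (weights assumed already loaded)
backbone = nn.Sequential(nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, 16), nn.ReLU())
head = nn.Linear(16, 3)             # e.g. 3 petrophysical classes

for p in backbone.parameters():     # freeze the transferred feature extractor
    p.requires_grad = False

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x_new = torch.randn(200, 6)                  # few labelled samples from new area B
y_new = torch.randint(0, 3, (200,))
for _ in range(100):                         # fine-tune the head only
    opt.zero_grad()
    loss = loss_fn(head(backbone(x_new)), y_new)
    loss.backward()
    opt.step()
```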
- Published
- 2019
50. Giving the legacy seismic data the attention they deserve
- Author
- Raymond Durrheim, Musa Manzi, and Alireza Malehmir
- Subjects
Regional geology ,Engineering geology ,010502 geochemistry & geophysics ,Geologic map ,01 natural sciences ,Tectonics ,Mineral exploration ,Geophysics ,Mining engineering ,Economic geology ,Palaeogeography ,Geology ,0105 earth and related environmental sciences ,Environmental geology - Abstract
Key minerals may soon be in short supply as shallow mineral deposits are mined out; therefore, exploration for economically feasible deep-seated deposits to sustain long-term global growth is a great challenge. New deposits are likely to be found using reflection seismic surveys in combination with drilling, field geological mapping and other geophysical methods. Seismic methods have already contributed significantly to the discovery of some of the world’s major mineral deposits (Milkereit et al., 1996; Pretorius et al., 2000; Trickett et al., 2004; Malehmir and Bellefleur, 2009; Malehmir et al., 2012). However, use of the method is not widespread because it is deemed to be expensive. Although improvements in computing capabilities have led to cost reductions, the costs are still beyond the exploration budgets of many companies. Thus, mining companies have had little financial ability to acquire new reflection seismic data, and very little governmental support has been available for research seismic surveys for mineral exploration. Over the last few years, there has been a proliferation of seismic solutions that employ various combinations of equipment, acquisition, and processing techniques, which can be applied in hard rock situations to improve the imaging resolution (Denis et al., 2013). The best acquisition solutions to date have come from the deployment of high-density receiver and source arrays and the extension of the seismic bandwidth to six octaves using broadband sources (Duval, 2012). Another area of seismic research has focused on surface seismic acquisition using three-component (3C) micro-electro-mechanical (MEMS-based) seismic landstreamers (Brodic et al., 2015), coupled with wireless seismic recorders, and surface-tunnel-seismic surveys (Brodic et al., 2017). However, numerous difficulties have been encountered, even with these innovative seismic acquisition approaches. Seismic surveys acquired in mining regions suffer from noise produced by the drilling, blasting and transport of rock and the crushing of ore. Furthermore, in some mining regions the acquisition of new data is not permitted due to new environmental regulations. In such a fast-evolving seismic technological era, legacy reflection seismic data are often regarded by mining companies and geoscientists as inferior to newly acquired data. This paper demonstrates that if legacy data are properly retrieved, reprocessed, and interpreted using today’s standard techniques, they can be of significant value, particularly in mining regions where no other data are available or where the acquisition of new data is difficult and expensive. The development of multitudes of processing algorithms and seismic attributes, in particular, makes it worthwhile to reprocess and interpret legacy data to enhance the detection of steeply dipping structures and geological features below the conventional seismic resolution limit (i.e., a quarter of the dominant wavelength), which was not possible with the tools that were available when the data were originally acquired and processed. The new information obtained from legacy data may benefit future mine planning operations by discovering new ore deposits, and by providing a better estimation of the resources and information that will help to site and sink future shafts.
Thus, any future mineral exploration project could also take the geological information obtained from the reprocessed and interpreted legacy seismic data into account when planning new advanced seismic surveys (Manzi et al., 2018). The latest seismic algorithms are particularly interesting to South Africa’s deep mining industry because South Africa has the world’s largest hard rock seismic database, which could benefit from new processing techniques and attribute analyses. These techniques could be applied to legacy seismic data to identify areas of interest, improve structural resolution and locate deeper ore deposits. Seismic attributes, in particular, could be used to identify subtle geological structures crosscutting these deposits ahead of the mining face that could affect mine planning and safety.
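For reference, the conventional vertical resolution limit quoted above, a quarter of the dominant wavelength, works out as follows (the velocity and frequency are illustrative hard-rock values, not figures from the paper):

\[
\frac{\lambda}{4} \;=\; \frac{v}{4f} \;=\; \frac{6000\ \mathrm{m/s}}{4 \times 60\ \mathrm{Hz}} \;=\; 25\ \mathrm{m},
\]

so features thinner than roughly 25 m fall below the conventional limit that attribute analysis of legacy data aims to push past.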
- Published
- 2019