Although modern recording capacity facilitates dense seismic acquisition, many, if not most, legacy 3D land surveys are spatially aliased with respect to ground roll. Irregular topography and weathering zones give rise to ground roll that has piecewise rather than continuous linear moveout (LMO). Dispersion often results in shingled events whose phase velocity cuts across the ground-roll noise cone. We have developed a workflow for the suppression of highly aliased broadband ground roll where modern f-kx-ky filters fail. Our workflow began with lowpass filtering and windowing the data, 3D patch by 3D patch. We then applied LMO corrections using an average phase velocity of the ground roll and improved these moveout corrections through local three-shot by three-receiver 3D velocity scans about each sample to account for lateral changes in velocity, thickness, and weathering-zone topography. Using a Kuwahara algorithm, we chose the most coherent window, within which we applied a structure-oriented Karhunen-Loève filter to model the coherent noise. Finally, we removed the LMO correction and subtracted the modeled ground roll from the original data. We applied our workflow to a legacy data volume consisting of four merged 3D surveys acquired in the 1990s. Application of modern seismic attributes showed improved mapping of faults and flexures. We also validated our workflow using a synthetic gather having the same geometry as our field data.
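As an illustration of the kind of processing chain this abstract describes, the sketch below lowpass filters a gather, applies an LMO correction at an assumed average ground-roll velocity, models the flattened coherent noise with a Karhunen-Loève (leading principal-component) filter, removes the LMO, and subtracts the modeled ground roll. This is a minimal single-window, single-velocity sketch: the function names, the 800 m/s velocity, and the 15 Hz lowpass corner are illustrative assumptions, and it omits the patchwise windowing, local velocity scans, and Kuwahara window selection of the published workflow.

```python
# Minimal sketch of ground-roll modeling and subtraction (assumed parameters).
import numpy as np

def lmo_correct(gather, offsets, velocity, dt, inverse=False):
    """Shift each trace by offset/velocity to flatten (or restore) linear moveout."""
    nt, ntr = gather.shape
    out = np.zeros_like(gather)
    freqs = np.fft.rfftfreq(nt, d=dt)
    for i in range(ntr):
        shift = offsets[i] / velocity          # time shift in seconds
        if inverse:
            shift = -shift
        spec = np.fft.rfft(gather[:, i])
        out[:, i] = np.fft.irfft(spec * np.exp(2j * np.pi * freqs * shift), n=nt)
    return out

def kl_model(gather, rank=1):
    """Model laterally coherent energy with the leading Karhunen-Loève components."""
    u, s, vt = np.linalg.svd(gather, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

def suppress_ground_roll(gather, offsets, dt, v_gr=800.0, f_max=15.0):
    """Subtract KL-modeled, LMO-flattened, low-frequency ground roll from a gather."""
    # 1. Lowpass: restrict the noise model to the low-frequency ground-roll band.
    nt = gather.shape[0]
    spec = np.fft.rfft(gather, axis=0)
    freqs = np.fft.rfftfreq(nt, d=dt)
    spec[freqs > f_max, :] = 0.0
    low = np.fft.irfft(spec, n=nt, axis=0)
    # 2. Flatten the ground roll with an average phase velocity, then
    # 3. model the flattened coherent noise with a rank-1 KL filter.
    flat = lmo_correct(low, offsets, v_gr, dt)
    noise_flat = kl_model(flat, rank=1)
    # 4. Undo the LMO correction and subtract the modeled noise from the input.
    noise = lmo_correct(noise_flat, offsets, v_gr, dt, inverse=True)
    return gather - noise
```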
Recent developments in attribute analysis and machine learning have significantly enhanced interpretation workflows for 3D seismic surveys. Nevertheless, even in 2018, many sedimentary basins are covered only by grids of 2D seismic lines. These 2D surveys are suitable for regional feature mapping and often identify targets in areas not covered by 3D surveys. With continuing pressure to cut costs in the hydrocarbon industry, it is crucial to extract as much information as possible from these 2D surveys. Unfortunately, many, if not most, modern interpretation software packages are designed to work exclusively with 3D data. To determine whether we can apply 3D volumetric interpretation workflows to grids of 2D seismic lines, we have applied data conditioning, attribute analysis, and a machine-learning technique called self-organizing maps to the 2D data acquired over the Exmouth Plateau, North Carnarvon Basin, Australia. We find that these workflows allow us to significantly improve image quality, interpret regional geologic features, identify local anomalies, and perform seismic facies analysis. However, these workflows are not without pitfalls. We need to be careful in choosing the order of filters in the data conditioning workflow and be aware of reflector misties at line intersections. Vector data, such as reflector convergence, need to be extracted and then mapped component by component before combining the results. We are also unable to perform attribute extraction along a surface or geobody extraction for 2D data in our commercial interpretation software package. To address this issue, we devise a point-by-point attribute extraction workaround to overcome the incompatibility between 3D interpretation workflows and 2D data.
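The point-by-point attribute extraction workaround mentioned above lends itself to a simple illustration. The sketch below shows one plausible form of such a workaround for a single 2D line, assuming horizon picks stored as (trace index, time) pairs; the array layout and the helper name extract_along_horizon are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: sample an attribute section at horizon picks, point by point.
import numpy as np

def extract_along_horizon(attribute, t0, dt, pick_trace_idx, pick_times):
    """Return attribute values sampled at (trace, time) horizon picks.

    attribute      : 2D array (nt, ntraces) of one attribute along one 2D line
    t0, dt         : time of the first sample and the sample interval (s)
    pick_trace_idx : integer trace index of each horizon pick
    pick_times     : picked two-way time (s) of each horizon pick
    """
    nt = attribute.shape[0]
    values = np.full(len(pick_times), np.nan)
    for k, (itr, t) in enumerate(zip(pick_trace_idx, pick_times)):
        s = (t - t0) / dt                      # fractional sample index
        i = int(np.floor(s))
        if 0 <= i < nt - 1:
            w = s - i                          # linear interpolation weight
            values[k] = (1 - w) * attribute[i, itr] + w * attribute[i + 1, itr]
    return values

# Usage idea: repeat per 2D line, then merge the extracted points from all lines
# into one table for gridding and mapping in the interpretation package.
```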
Semblance and other coherence measures are routinely used in seismic processing, such as velocity spectra analysis; in seismic interpretation, to estimate volumetric dip and to delineate geologic boundaries; and in poststack and prestack data conditioning, such as edge-preserving structure-oriented filtering. Although interpreters readily understand the significance of outliers for measures such as seismic amplitude, described by a Gaussian (or normal) distribution, and root-mean-square amplitude, described by a log-normal distribution, the significance of a given coherence measurement of poststack seismic data is much more difficult to grasp. We have followed early work on the significance of events seen in semblance-based velocity spectra, and we used an F-statistic to quantify the significance of coherence measures at each voxel. The accuracy and resolution of these measures depended on the bandwidth of the data, the signal-to-noise ratio (S/N), and the size of the spatial and temporal analysis windows used in their numerical estimation. In 3D interpretation, low coherence arises not only from seismic noise but also from geologic signal, such as fault planes and channel edges. Therefore, we have estimated the S/N as the product of coherence and a measure of randomness, for which we evaluated two alternatives: the first is the disorder attribute, and the second is based on the eigenvalues of a window of coherence values. The disorder attribute is fast and easy to compute, whereas the eigenvalue calculation is computationally intensive but more accurate. We have demonstrated the value of this measure through application to two 3D surveys, in which we modulated coherence measures by our F-statistic measure to show where discontinuities were significant and where they corresponded to more chaotic features.
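To make the connection between coherence and its statistical significance concrete, the sketch below computes windowed semblance and maps it to a variance-ratio (F-like) statistic using the commonly quoted approximation F = (M - 1)S/(1 - S) for M traces. This relation, the window size, and the function names are simplifying assumptions for illustration, not the authors' exact formulation.

```python
# Minimal sketch: windowed semblance mapped to an F-like significance measure.
import numpy as np

def semblance(window):
    """Semblance of a (nsamples, ntraces) analysis window."""
    stack = window.sum(axis=1)                 # sum over traces at each time sample
    num = np.sum(stack ** 2)
    den = window.shape[1] * np.sum(window ** 2)
    return num / den if den > 0 else 0.0

def f_statistic(s, ntraces):
    """Convert semblance to a variance-ratio (F-like) statistic (assumed relation)."""
    s = np.clip(s, 1e-6, 1.0 - 1e-6)           # guard against division by zero
    return (ntraces - 1) * s / (1.0 - s)

def significance_profile(data, half_t=5, step=1):
    """Slide a temporal window down a group of traces and map semblance to F."""
    nt, ntraces = data.shape
    fvals = np.zeros(nt)
    for it in range(half_t, nt - half_t, step):
        win = data[it - half_t:it + half_t + 1, :]
        fvals[it] = f_statistic(semblance(win), ntraces)
    return fvals
```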
In a machine learning workflow, data normalization is a crucial step that compensates for the large variation in data ranges and averages associated with different types of input measured in different units. However, most machine learning implementations do not provide data normalization beyond the z-score algorithm, which subtracts the mean from the distribution and then scales the result by dividing by the standard deviation. Although the z-score converts data with Gaussian behavior to a common shape and scale, many of our seismic attribute volumes exhibit log-normal or even more complicated distributions. Because many machine learning applications are based on Gaussian statistics, we wish to evaluate the impact of more sophisticated data normalization techniques on the resulting classification. To do so, we provide an in-depth analysis of data normalization in machine-learning classification by formulating and applying a logarithmic data transformation scheme to the unsupervised classifications (including PCA, ICA, SOM, and GTM) of a turbidite channel system in the Canterbury Basin, New Zealand, as well as implementing a per-class normalization scheme for the supervised probabilistic neural network (PNN) classification of salt in the Eugene Island mini-basin, Gulf of Mexico. Compared with simple z-score normalization, a single logarithmic transformation applied to each input attribute significantly increases the spread of the resulting clusters (and the corresponding color contrast), thereby enhancing subtle details in projection and unsupervised classification. However, this same uniform transformation produces less confident results in supervised classification using probabilistic neural networks. We find that more accurate supervised classifications can be obtained by applying class-dependent normalization to each input attribute.
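The sketch below contrasts the z-score normalization discussed above with a logarithmic transform followed by a z-score, and adds a per-class variant that normalizes an attribute using the statistics of one class's training samples only. The signed-log form, the epsilon guard, and the function names are illustrative assumptions rather than the authors' exact scheme.

```python
# Minimal sketch: z-score, log-then-z-score, and per-class normalization.
import numpy as np

def zscore(x):
    """Subtract the mean and divide by the standard deviation."""
    return (x - x.mean()) / x.std()

def log_zscore(x, eps=1e-12):
    """Logarithmic transform (for log-normal-like attributes), then z-score."""
    y = np.sign(x) * np.log10(np.abs(x) + eps)   # signed log handles +/- attributes
    return zscore(y)

def per_class_zscore(x, labels, target_class):
    """Normalize an attribute using the mean/std of samples from one class only."""
    ref = x[labels == target_class]
    return (x - ref.mean()) / ref.std()

# Example on synthetic data: a log-normal attribute (RMS-amplitude-like) with
# hypothetical class labels (0 = background, 1 = salt) for a PNN-style input.
rng = np.random.default_rng(0)
rms_amp = rng.lognormal(mean=0.0, sigma=1.0, size=1000)
labels = rng.integers(0, 2, size=1000)
a = zscore(rms_amp)
b = log_zscore(rms_amp)
c = per_class_zscore(rms_amp, labels, target_class=1)
```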