In recent years, various convolutional neural network architectures have been proposed for first break picking. In this paper, we compare the standard auto-encoder and U-net architectures, as well as versions enhanced with ResNet-style skip connections. Judging from the number of published articles, the U-net appears to have become the standard network for segmentation, yet there is still some variety in architectural choices. We assess the impact of network depth, width and input data size, as well as the small modifications for deep networks offered by the ResNet. In general, results improve as the networks get deeper, but with diminishing returns; the more complex the data, the more benefit the deeper networks bring. We use complete shot gathers, albeit rescaled for efficiency, to train the neural networks. For shot gathers with a simple piecewise-linear moveout, this approach yields good accuracy when gathers are resampled to 128 × 128 samples. For shot gathers with more complex first-break moveout, it is advisable with our approach to stay close to the original dimensions of each gather for best accuracy, at the expense of increased training times. A good trade-off between network depth, image size and training time is a nine-stage U-net with 256-sample images. Despite the advantages of ResNet features in other applications, the basic U-net outperforms a U-net with ResNet features. We show that changing the input data dimensions of a trained network does not work, even though fully convolutional networks are, in principle, independent of image size. U-net-based first break picking is not sensitive to picking errors: in many cases the neural network predictions are better than training data that contain random mispicks. This suggests a practical application, namely to train, or re-train a pre-trained network, on a single data set after conventional first break picking, with the objective of improving the conventionally picked first breaks.
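To make the architecture discussion concrete, the sketch below shows a depth-configurable U-net for first break picking framed as per-pixel segmentation of a shot gather (samples before versus after the first arrival). This is an illustration only, not the authors' exact network: the number of stages, the base channel count and the 128 × 128 input size are assumptions taken from the text, and PyTorch is used purely for demonstration.

```python
# Minimal sketch of a depth-configurable U-net for first-break picking as
# binary segmentation. Hypothetical hyperparameters; not the paper's exact model.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the usual U-net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class UNet(nn.Module):
    def __init__(self, stages=5, base_ch=16, in_ch=1, out_ch=1):
        super().__init__()
        chs = [base_ch * 2 ** i for i in range(stages)]
        self.downs = nn.ModuleList()
        prev = in_ch
        for c in chs:
            self.downs.append(conv_block(prev, c))
            prev = c
        self.pool = nn.MaxPool2d(2)
        self.ups = nn.ModuleList()
        self.up_convs = nn.ModuleList()
        for c in reversed(chs[:-1]):
            self.ups.append(nn.ConvTranspose2d(prev, c, 2, stride=2))
            self.up_convs.append(conv_block(2 * c, c))
            prev = c
        self.head = nn.Conv2d(prev, out_ch, 1)  # per-pixel logits

    def forward(self, x):
        skips = []
        for i, down in enumerate(self.downs):
            x = down(x)
            if i < len(self.downs) - 1:
                skips.append(x)
                x = self.pool(x)
        for up, conv, skip in zip(self.ups, self.up_convs, reversed(skips)):
            x = up(x)
            x = conv(torch.cat([x, skip], dim=1))  # skip connection by concatenation
        return self.head(x)


# Example: one shot gather resampled to 128 x 128 samples, single channel.
if __name__ == "__main__":
    net = UNet(stages=5, base_ch=16)
    gather = torch.randn(1, 1, 128, 128)
    mask_logits = net(gather)  # (1, 1, 128, 128) segmentation logits
    loss = nn.BCEWithLogitsLoss()(mask_logits, torch.zeros_like(mask_logits))
```

Increasing `stages` deepens the encoder/decoder in the sense discussed above; a ResNet-flavoured variant would add residual (identity) connections inside each `conv_block`.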
Low-frequency seismic data are crucial for convergence of full-waveform inversion (FWI) to reliable subsurface properties. However, it is challenging to acquire field data with an appropriate signal-to-noise ratio in the low-frequency part of the spectrum. We have extrapolated low-frequency data from the respective higher frequency components of the seismic wavefield by using deep learning. Through wavenumber analysis, we find that extrapolation per shot gather has broader applicability than per-trace extrapolation. We numerically simulate marine seismic surveys for random subsurface models and train a deep convolutional neural network to derive a mapping between high and low frequencies. The trained network is then tested on sections from the BP and SEAM Phase I benchmark models. Our results indicate that we are able to recover 0.25 Hz data from the 2 to 4.5 Hz frequencies. We also determine that the extrapolated data are accurate enough for FWI application.
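The key idea above is learning a mapping from the recorded high-frequency band to the missing low-frequency band, per shot gather. The sketch below only illustrates how such input/target pairs could be prepared from synthetic gathers by band-pass filtering; the band edges, filter order and sampling rate are assumptions, and the paper's actual preprocessing and network are not reproduced.

```python
# Minimal sketch of preparing input/target pairs for per-shot-gather
# low-frequency extrapolation. Hypothetical band edges and filter settings.
import numpy as np
from scipy.signal import butter, sosfiltfilt


def band_limit(gather, fs, f_lo, f_hi, order=4):
    """Zero-phase band-pass of a (traces x samples) shot gather along time."""
    sos = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, gather, axis=-1)


# Random stand-in for a simulated marine shot gather, sampled at 250 Hz (dt = 4 ms).
fs = 250.0
gather = np.random.randn(256, 2048)

x_high = band_limit(gather, fs, 2.0, 4.5)   # network input: 2-4.5 Hz band
y_low = band_limit(gather, fs, 0.1, 0.5)    # target: band around 0.25 Hz
```

A convolutional network (for example a U-net like the one sketched earlier) would then be trained to map `x_high` to `y_low` on many simulated gathers before being applied to the benchmark data.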
Machine learning (ML) applications in seismic exploration are growing faster than in other industry fields, mainly because of the large volumes of data acquired for exploration. ML algorithms are being applied to almost every step of the seismic processing and interpretation workflow, mainly for automation, reduction of processing time and efficiency, and in some cases for improving results. We carried out a literature-based analysis of ML-based seismic processing and interpretation published in the SEG and EAGE literature repositories and derived a detailed overview of the main ML thrusts in different seismic applications. For each publication, we extracted metadata about the ML implementation and its performance. The data indicate that current ML implementations in seismic exploration focus on individual tasks rather than on a disruptive change to processing and interpretation workflows. The metadata show that the main targets of ML for seismic processing are denoising, velocity model building and first break picking, whereas for seismic interpretation they are fault detection, lithofacies classification and geo-body identification. From the metadata available in the publications, we derived indices related to computational efficiency, simplicity of data preparation, rate of testing the ML model on real data, diversity of ML methods, etc., and used them to approximate the efficiency, effectiveness and applicability of current ML-based seismic processing and interpretation tasks. For the processing tasks, the indices show that ML-based denoising and frequency extrapolation have higher efficiency, whereas ML-based quality control is more effective and applicable than other processing tasks. Among the interpretation tasks, ML-based impedance inversion shows high efficiency, fault detection shows high effectiveness, and lithofacies classification, stratigraphic sequence identification and petro/rock property inversion exhibit high applicability.
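As a rough illustration of how per-publication metadata can be turned into per-task indices, the sketch below aggregates a few hypothetical records into fractions per task. The column names, tasks and scoring scheme (a plain mean of 0/1 flags) are assumptions for demonstration; the survey's actual index definitions are not reproduced here.

```python
# Minimal sketch of aggregating publication metadata into simple per-task indices.
# All records and column names below are hypothetical.
import pandas as pd

records = pd.DataFrame([
    # task, tested on real data?, simple data preparation?, fast runtime reported?
    {"task": "denoising", "real_data_test": 1, "simple_data_prep": 1, "fast_runtime": 1},
    {"task": "denoising", "real_data_test": 0, "simple_data_prep": 1, "fast_runtime": 1},
    {"task": "first_break_picking", "real_data_test": 1, "simple_data_prep": 0, "fast_runtime": 1},
    {"task": "fault_detection", "real_data_test": 1, "simple_data_prep": 1, "fast_runtime": 0},
])

# One row per task: the fraction of publications with each favourable property.
indices = records.groupby("task").mean(numeric_only=True)
print(indices)
```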