Reconstructing the details of subsurface structures deep beneath complex overburden, such as sub-salt, remains a challenge for seismic imaging. In recent years, the Marchenko redatuming approach has proven able to reliably retrieve full-wavefield information in the presence of complex overburden effects. When used for redatuming, current practical Marchenko schemes cannot exploit a priori subsurface models with sharp contrasts because of their requirements on the initial focusing functions; for sufficiently complex media, this can result in redatumed fields with significant waveform inaccuracies. Using a scattering framework, we present an alternative form of the Marchenko representation that retrieves only the unknown perturbations to both the focusing functions and the redatumed fields. From this framework, we propose a practical two-step focusing-based redatuming scheme that first solves an inverse problem for the background focusing functions, which are then used to estimate the perturbations to the focusing functions and redatumed fields. In our scheme, the initial focusing functions differ significantly from those of previous approaches, since they contain complex waveforms encoding the full transmission response of the a priori model. Our goal is to handle not only highly complex media but also realistic data: band-limited, unevenly sampled, and contaminated by free-surface multiples. To that end, we combine the versatility of Rayleigh-Marchenko redatuming with the proposed scattering-based scheme, yielding an extended version of the method able to handle single-sided, band-limited, multicomponent data. This Scattering-Rayleigh-Marchenko strategy accurately retrieves wavefields while requiring minimal pre-processing of the data. In support of the new methods, we present a comprehensive set of numerical tests on a complex 2D subsalt model. Our numerical results show that the scattering approaches retrieve accurate redatumed fields that properly account for the complexity of the a priori model, and that the improvements in wavefield retrieval translate into measurable improvements in the resulting subsalt images.
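Schemes of this family build on the coupled Marchenko equations, in which focusing functions are refined by repeated multi-dimensional convolution and correlation with the reflection data. The sketch below illustrates only that generic building block; names such as mdc, marchenko_update, R, theta, and f_plus0 are illustrative assumptions, not the authors' implementation, and in the scattering variant described above f_plus0 would carry the full transmission response of the a priori model rather than a direct-arrival estimate.

```python
# Minimal sketch, assuming co-located sources/receivers (nr == ns), a dense
# reflection response R of shape (nt, nr, ns), and a focusing window theta
# of shape (nt, nr); dt, dr are the time and receiver sampling steps.
import numpy as np

def mdc(kernel, x, dt, dr, nt, adjoint=False):
    """Multi-dimensional convolution as per-frequency matrix products."""
    nfft = 2 * nt - 1
    K = np.fft.fft(kernel, nfft, axis=0)          # (nfft, nr, ns)
    if adjoint:
        # Correlation instead of convolution; by source-receiver
        # reciprocity the transpose of the kernel is immaterial here.
        K = np.conj(K)
    X = np.fft.fft(x, nfft, axis=0)               # (nfft, ns)
    Y = np.einsum('frs,fs->fr', K, X) * dt * dr
    return np.fft.ifft(Y, axis=0)[:nt].real

def marchenko_update(R, f_plus, f_plus0, theta, dt, dr, nt):
    """One fixed-point update of the coupled Marchenko equations:
    f- = Theta[R f+],  f+ = f0+ + Theta[R* f-]."""
    f_minus = theta * mdc(R, f_plus, dt, dr, nt)
    f_plus = f_plus0 + theta * mdc(R, f_minus, dt, dr, nt, adjoint=True)
    return f_minus, f_plus
```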
Multidimensional deconvolution constitutes an essential operation in a variety of geophysical scenarios at scales ranging from reservoir to crustal, appearing in applications such as surface multiple elimination, target-oriented redatuming, and interferometric body-wave retrieval, to name just a few. Depending on the use case, active, microseismic, or teleseismic signals are used to reconstruct the broadband response that would have been recorded between two observation points as if one were a virtual source. Reconstructing such a response relies on the solution of an ill-conditioned linear inverse problem that is sensitive to noise and to artifacts arising from incomplete acquisition, limited sources, and band-limited data. Typically, this inversion is performed in the Fourier domain, where the inverse problem is solved per frequency via direct or iterative solvers. While this inversion is in theory meant to remove spurious events from cross-correlation gathers and to correct amplitudes, difficulties arise in the estimation of optimal regularization parameters, worsened by the fact that they must be estimated at each frequency independently. Here we show the benefits of formulating the problem in the time domain and introduce a number of physical constraints that naturally drive the inversion towards a reduced set of stable, meaningful solutions. By exploiting reciprocity, time causality, and frequency-wavenumber locality, a set of preconditioners is included at minimal additional cost to alleviate the dependency on an optimal damping parameter for stabilizing the inversion. With an interferometric redatuming example, we demonstrate how our time-domain implementation successfully reconstructs the overburden-free reflection response beneath a complex salt body from noise-contaminated up- and down-going transmission responses at the target level.
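To make the time-domain formulation concrete, the following is a minimal sketch (not the authors' code) of MDD posed as a single linear inverse problem solved with LSQR, with a simple reciprocity-symmetrization preconditioner standing in for the fuller set of constraints described above. It assumes down- and up-going fields D and U of shape (nt, nr, ns) and omits sampling scalings.

```python
# Minimal time-domain MDD sketch: solve U(f) = R(f) @ D(f) for all
# frequencies jointly, parameterizing R in the time domain.
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

nt, nr, ns = D.shape
nfft = 2 * nt - 1
Df = np.fft.fft(D, nfft, axis=0)                   # (nfft, nr, ns)

def precond(x):
    """Reciprocity: the datum reflection response is source-receiver
    symmetric, so project onto symmetric matrices per time sample."""
    X = x.reshape(nt, nr, nr)
    return 0.5 * (X + X.transpose(0, 2, 1)).ravel()

def matvec(x):
    """Forward MDC: convolve candidate R with the down-going field."""
    X = np.fft.fft(precond(x).reshape(nt, nr, nr), nfft, axis=0)
    Y = np.einsum('frv,fvs->frs', X, Df)
    return np.fft.ifft(Y, axis=0)[:nt].real.ravel()

def rmatvec(y):
    """Adjoint MDC: correlate the residual with the down-going field."""
    Y = np.fft.fft(y.reshape(nt, nr, ns), nfft, axis=0)
    X = np.einsum('frs,fvs->frv', Y, np.conj(Df))
    return precond(np.fft.ifft(X, axis=0)[:nt].real.ravel())

A = LinearOperator((nt * nr * ns, nt * nr * nr),
                   matvec=matvec, rmatvec=rmatvec, dtype=np.float64)
z = lsqr(A, U.ravel(), damp=1e-4, iter_lim=50)[0]
R = precond(z).reshape(nt, nr, nr)                 # datum reflection response
```

Because the preconditioner is applied inside both matvec and rmatvec, the operator pair remains exactly adjoint, which LSQR requires; the symmetrization shrinks the effective search space and thereby reduces the reliance on the damping parameter alone.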
Efficient computer programming is becoming a central requirement in quantitative Earth science education. This applies not only to the early career stage but, given the rapid evolution of programming paradigms, throughout professional life. At universities, workshops, or any software training event, practical programming exercises are hampered by the heterogeneity of participants' hardware and software setups. Jupyter notebooks offer an attractive solution by providing a platform-independent concept and by combining text editing, program execution, and plotting. Here, we document a growing library with dozens of Jupyter notebooks for training in seismology. The library is made “live” through a server that allows the notebooks to be accessed and run in the browser on any system (PC, laptop, tablet, smartphone) with internet access. The seismo-live library contains notebooks on many aspects of seismology, including data processing, computational seismology, and earthquake physics, as well as reproducible papers and graphics. It is a community effort and is intended to benefit from continuous interaction with seismologists around the world.
A variety of wave-equation-based seismic processing algorithms rely on the repeated application of the Multi-Dimensional Convolution (MDC) operator. For large-scale 3D seismic surveys, this entails severe computational challenges due to the sheer size of the high-density, full-azimuth seismic datasets required by such algorithms. We present a threefold solution that greatly alleviates the memory footprint and computational cost of 3D MDC by combining i) distance-aware matrix reordering, ii) Tile Low-Rank (TLR) matrix compression, and iii) computations in mixed floating-point precision. By applying our strategy to a 3D synthetic dataset, we show that the size of the kernel matrices used in the Marchenko redatuming and Multi-Dimensional Deconvolution equations can be reduced by factors of 34 and 6, respectively. We also introduce a TLR Matrix-Vector Multiplication (TLR-MVM) algorithm that, as a direct consequence of this compression, is consistently faster than its dense counterpart by factors of 4.8 to 36.1 (depending on the selected hardware). As a result, the associated inverse problems can be solved at a fraction of the cost of state-of-the-art implementations that require a pass through the entire data at each MDC operation. This is achieved with minimal impact on the quality of the processing outcome.
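As a rough illustration of the compression idea (not the authors' TLR-MVM kernels, which target accelerator hardware and mixed precision), the sketch below compresses a dense kernel matrix into per-tile truncated SVD factors and applies it to a vector using only those factors; names such as tlr_compress, nb, and tol are illustrative assumptions.

```python
# Minimal TLR sketch, assuming A is one dense frequency-slice kernel
# (NumPy array), nb the tile size, and tol the per-tile rank cutoff.
import numpy as np

def tlr_compress(A, nb, tol):
    """Split A into nb-by-nb tiles and keep a truncated SVD per tile."""
    m, n = A.shape
    tiles = {}
    for i in range(0, m, nb):
        for j in range(0, n, nb):
            U, s, Vt = np.linalg.svd(A[i:i+nb, j:j+nb], full_matrices=False)
            k = max(1, int(np.sum(s > tol * s[0])))   # numerical rank
            tiles[i, j] = (U[:, :k] * s[:k], Vt[:k])  # (rows, k), (k, cols)
    return tiles

def tlr_mvm(tiles, x, m):
    """y = A @ x evaluated from the low-rank tile factors only."""
    y = np.zeros(m, dtype=x.dtype)
    for (i, j), (Us, Vt) in tiles.items():
        y[i:i+len(Us)] += Us @ (Vt @ x[j:j+Vt.shape[1]])
    return y
```

In this form a tile of size nb-by-nb is stored with roughly 2*nb*k numbers instead of nb*nb, which is where order-of-magnitude size reductions come from when ranks are small; the distance-aware reordering serves precisely to group nearby sources and receivers so that off-diagonal tiles stay low-rank.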