In seismic imaging, the aim is to obtain an image of the subsurface from reflection data. The reflection data are generated using sound waves, with sources and receivers placed at the surface. The target zone, for example an oil or gas reservoir, lies relatively deep in the subsurface below several layers. The area above the target zone is called the overburden. This overburden leaves an imprint on the image. Wavefield redatuming is an approach that removes this imprint by creating so-called virtual sources and receivers above the target zone. The virtual sources are obtained by determining the impulse response, or Green's function, in the subsurface. The impulse response is obtained by deconvolving all up- and downgoing wavefields at the desired location. In this paper, we pose this deconvolution problem as a constrained least-squares problem. We describe the constraints that are involved in the deconvolution and show that they are associated with orthogonal projection operators. We present different optimization strategies for solving the constrained least-squares problem and provide an explicit relation between them, showing that they are in a sense equivalent. We show that the constrained least-squares problem remains ill-posed and that additional regularization has to be provided. We show that Tikhonov regularization leads to improved resolution and a stable optimization procedure, but that the correct regularization parameter cannot be estimated using standard parameter selection methods. We also show that the constrained least-squares problem can be posed in such a way that additional nonlinear regularization is possible.
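To make the setup concrete, below is a minimal sketch (not the authors' code) of how such a deconvolution could be posed as a Tikhonov-damped least-squares problem with an orthogonal projection constraint. The causality mask used as the projector, the Gaussian wavelet, and all parameter values are illustrative assumptions.

```python
# Sketch: deconvolution as constrained least-squares with Tikhonov damping.
# The projector P (a 0/1 time mask enforcing causality) satisfies P^T = P = P^2.
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(0)
nt = 200
t = np.arange(nt) * 0.004                       # time axis, 4 ms sampling

# Downgoing wavefield as a convolution (lower-triangular Toeplitz) operator
wavelet = np.exp(-((t - 0.1) / 0.02) ** 2)      # stand-in source signature
A = toeplitz(wavelet, np.zeros(nt))

# Orthogonal projector: zero everything before an assumed onset of 0.05 s
mask = (t >= 0.05).astype(float)
P = lambda x: mask * x

# Synthetic upgoing data b = A g + noise, with a causal true impulse response g
g_true = mask * rng.standard_normal(nt) * np.exp(-t * 5)
b = A @ g_true + 0.05 * rng.standard_normal(nt)

# Solve min_g ||A P g - b||^2 + lam ||g||^2 via LSQR (damp = sqrt(lam))
op = LinearOperator((nt, nt), matvec=lambda x: A @ P(x),
                    rmatvec=lambda y: P(A.T @ y))
lam = 1e-1
g_est = P(lsqr(op, b, damp=np.sqrt(lam))[0])    # project the final estimate
print("relative error:", np.linalg.norm(g_est - g_true) / np.linalg.norm(g_true))
```

The damping parameter `lam` is exactly the Tikhonov regularization parameter whose selection the abstract identifies as problematic.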
Multidimensional deconvolution constitutes an essential operation in a variety of geophysical scenarios at different scales, ranging from reservoir to crustal, as it appears in applications such as surface multiple elimination, target-oriented redatuming, and interferometric body-wave retrieval, to name just a few. Depending on the use case, active, microseismic, or teleseismic signals are used to reconstruct the broadband response that would have been recorded between two observation points as if one were a virtual source. Reconstructing such a response relies on the solution of an ill-conditioned linear inverse problem that is sensitive to noise and to artifacts due to incomplete acquisition, limited sources, and band-limited data. Typically, this inversion is performed in the Fourier domain, where the inverse problem is solved per frequency via direct or iterative solvers. While this inversion is in theory meant to remove spurious events from cross-correlation gathers and to correct amplitudes, difficulties arise in the estimation of optimal regularization parameters, worsened by the fact that they must be estimated at each frequency independently. Here we show the benefits of formulating the problem in the time domain and introduce a number of physical constraints that naturally drive the inversion towards a reduced set of stable, meaningful solutions. By exploiting reciprocity, time causality, and frequency-wavenumber locality, a set of preconditioners is included at minimal additional cost as a way to alleviate the dependency on an optimal damping parameter to stabilize the inversion. With an interferometric redatuming example, we demonstrate how our time-domain implementation successfully reconstructs the overburden-free reflection response beneath a complex salt body from noise-contaminated up- and down-going transmission responses at the target level.
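As an illustration of the three physical constraints named above, the following sketch (an assumption-laden toy, not the paper's implementation) realizes each one as an orthogonal projector acting on a candidate kernel x(r, s, t); the array sizes, onset index, and wavenumber corner are all hypothetical.

```python
# Sketch: reciprocity, time causality, and f-k locality as projectors.
import numpy as np

nr, nt = 64, 128
rng = np.random.default_rng(1)
X = rng.standard_normal((nr, nr, nt))          # candidate kernel x(r, s, t)

def P_reciprocity(X):
    # symmetrize over source/receiver indices: x(r, s, t) = x(s, r, t)
    return 0.5 * (X + X.transpose(1, 0, 2))

def P_causality(X, t0=10):
    # zero all samples before an (illustrative) onset time index t0
    Y = X.copy(); Y[..., :t0] = 0.0
    return Y

def P_fk(X, kmax=20):
    # keep only low frequency-wavenumber content (illustrative corner);
    # the Hermitian-symmetric 0/1 mask keeps the output real
    F = np.fft.fftn(X, axes=(0, 2))
    M = np.zeros(F.shape)
    M[:kmax, :, :kmax] = 1; M[-kmax:, :, -kmax:] = 1
    return np.real(np.fft.ifftn(F * M, axes=(0, 2)))

# Each map is idempotent (P^2 = P), the defining property of a projector:
for P in (P_reciprocity, P_causality, P_fk):
    Y = P(X)
    assert np.allclose(P(Y), Y)
```

Because each projector is a cheap elementwise or FFT-based operation, composing them into an iterative solver adds the "minimal additional cost" the abstract refers to.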
To limit the time, cost, and environmental impact associated with the acquisition of seismic data, considerable effort has been put in recent decades into so-called simultaneous shooting acquisitions, where seismic sources are fired at short time intervals from one another. As a consequence, waves originating from consecutive shots are entangled within the seismic recordings, yielding so-called blended data. For processing and imaging purposes, the data generated by each individual shot must be retrieved. This process, called deblending, is achieved by solving an inverse problem that is heavily underdetermined. Conventional approaches rely on transformations that render the blending noise into burst-like noise whilst preserving the signal of interest. Compressed-sensing-type regularization is then applied, where sparsity in some domain is assumed for the signal of interest. The domain of choice depends on the geometry of the acquisition and the properties of seismic data within the chosen domain. In this work, we introduce a new concept that consists of embedding a self-supervised denoising network into the Plug-and-Play (PnP) framework. A novel network is introduced whose design extends the blind-spot network architecture of [28] to partially coherent noise (i.e., noise correlated in time). The network is then trained directly on the noisy input data at each step of the PnP algorithm. By leveraging both the underlying physics of the problem and the strong denoising capabilities of our blind-spot network, the proposed algorithm is shown to outperform an industry-standard method whilst being comparable in terms of computational cost. Moreover, being independent of the acquisition geometry, our method can be easily applied to both marine and land data without any significant modification.
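The generic PnP recipe this abstract builds on can be summarized in a few lines. The sketch below is a plain proximal-gradient PnP loop under assumed interfaces (`Phi`, `Phi_T`, `denoise` are placeholder callables, not the paper's networks or operators); in the paper the denoiser would be the blind-spot network retrained on the current iterate.

```python
# Sketch: Plug-and-Play deblending as a gradient step on the blending
# data-fit followed by a denoiser standing in for the proximal operator.
import numpy as np

def pnp_deblend(d_blended, Phi, Phi_T, denoise, n_iter=50, step=1.0):
    """Phi applies the blending operator, Phi_T its adjoint, and
    `denoise` stands in for the self-supervised blind-spot network."""
    x = Phi_T(d_blended)                    # pseudo-deblended starting point
    for _ in range(n_iter):
        grad = Phi_T(Phi(x) - d_blended)    # data-consistency gradient
        x = denoise(x - step * grad)        # denoiser replaces the prox
    return x
```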
In this work, we address regularization parameter estimation for ill-posed linear inverse problems with an ℓ2 penalty. Regularization parameter selection is of utmost importance for all inverse problems, and in practice it often relies on the experience of the practitioner. For regularization with an ℓ2 penalty, there exist many parameter selection methods that exploit the fact that the solution and the residual can be written in explicit form. Parameter selection methods are functionals of the regularization parameter whose minimizer is the desired regularization parameter that should lead to a good solution. Evaluating these parameter selection methods still requires solving the inverse problem multiple times. Efficient evaluation of the parameter selection methods can be achieved through model order reduction. Two popular model order reduction techniques are Lanczos-based methods (a Krylov subspace method) and the Randomized Singular Value Decomposition (RSVD). In this work, we compare the two approaches. We derive error bounds for the parameter selection methods using the RSVD, and we compare the performance of the Lanczos process against that of the RSVD for efficient parameter selection. The RSVD algorithm we use is based on the Adaptive Randomized Range Finder algorithm, which allows for easy determination of the dimension of the reduced-order model. Some parameter selection methods also require the evaluation of the trace of a large matrix. We compare the use of a randomized trace estimator against the use of the Ritz values from the Lanczos process. The examples we use for our experiments are two model problems from the geosciences.
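To show why a reduced-order model makes parameter selection cheap, here is a sketch (assumptions only: a basic range-finder rather than the paper's adaptive variant, and GCV as the example functional) of evaluating generalized cross-validation on rank-k RSVD factors instead of the full operator.

```python
# Sketch: GCV for Tikhonov regularization evaluated from an RSVD of A.
import numpy as np

def rsvd(A, k, p=10, seed=None):
    # basic randomized range finder + SVD of the projected matrix
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k + p)))
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U)[:, :k], s[:k], Vt[:k]

def gcv(lams, A, b, k=50):
    # Tikhonov filter factors f_i = s_i^2 / (s_i^2 + lam^2); with the
    # RSVD factors, each lam costs O(k) instead of a full solve
    U, s, _ = rsvd(A, k)
    beta = U.T @ b
    r_perp = np.linalg.norm(b - U @ beta) ** 2   # residual outside range(U)
    m = A.shape[0]
    vals = []
    for lam in lams:
        f = s**2 / (s**2 + lam**2)
        res = np.sum(((1 - f) * beta) ** 2) + r_perp
        vals.append(m * res / (m - np.sum(f)) ** 2)
    return np.array(vals)

# Illustrative usage: pick the minimizer of GCV over a log-spaced grid
# lams = np.logspace(-6, 1, 50); lam_opt = lams[np.argmin(gcv(lams, A, b))]
```

The trace term in the GCV denominator is exactly the quantity that, for large matrices, motivates the randomized trace estimators and Lanczos Ritz values compared in the abstract.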