Dictionary learning (DL) methods are effective tools for automatically finding a sparse representation of a data set. They train a set of basis vectors on the data to capture the morphology of its redundant signals. The basis vectors are called atoms, and the set is referred to as the dictionary. This dictionary can be used to represent the data sparsely as a linear combination of a few of its atoms. In conventional DL, the atoms are unstructured and are only defined numerically over a grid that has the same sampling as the data. Consequently, the atoms are unknown away from this sampling grid, and a sparse representation of the data in the dictionary domain is not sufficient information to interpolate the data. To overcome this limitation, we have developed a DL method called parabolic DL, in which each learned atom is constrained to represent an elementary waveform that has a constant amplitude along a parabolic traveltime moveout. The parabolic structure is consistent with the physics inherent to the seismic wavefield and can be used to easily interpolate or extrapolate the atoms. Hence, we have developed a parabolic DL-based process to interpolate and regularize seismic data. Briefly, it consists of learning a parabolic dictionary from the data, finding a sparse representation of the data in the dictionary domain, interpolating the dictionary atoms over the desired grid, and, finally, taking the sparse representation of the data in the interpolated dictionary domain. We examine three characteristics of this method, namely the parabolic structure, the sparsity promotion, and the adaptation to the data, and we conclude that they strengthen robustness to noise and aliasing and increase the accuracy of the interpolation. For both synthetic and field data sets, we obtain successful seismic wavefield reconstructions across the streamers for typical 3D acquisition geometries.
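To make the interpolation step concrete, the following is a minimal sketch of how a parabolically structured atom can be re-evaluated on an arbitrary spatial grid. The parameterization (an apex time `t_apex`, a curvature, and a 1D waveform per atom) and the helper names `parabolic_atom` and `reconstruct` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def parabolic_atom(wavelet, t_axis, x_axis, t_apex, curvature):
    """Evaluate one parabolic atom on an arbitrary spatial grid.

    The atom is a 1D waveform with constant amplitude along the
    parabolic moveout t(x) = t_apex + curvature * x**2 (an assumed
    parameterization), so it can be sampled on any x_axis, which is
    what enables interpolation or extrapolation of the atom.
    """
    atom = np.zeros((t_axis.size, x_axis.size))
    for j, x in enumerate(x_axis):
        shift = t_apex + curvature * x**2      # parabolic traveltime moveout
        # sample the waveform delayed by the local moveout
        atom[:, j] = np.interp(t_axis - shift, t_axis, wavelet,
                               left=0.0, right=0.0)
    return atom

def reconstruct(coeffs, params, wavelets, t_axis, x_dense):
    """Sum the sparse code over atoms re-evaluated on a denser x grid."""
    out = np.zeros((t_axis.size, x_dense.size))
    for c, (t_apex, curv), w in zip(coeffs, params, wavelets):
        out += c * parabolic_atom(w, t_axis, x_dense, t_apex, curv)
    return out
```

Because each atom is defined analytically in x, evaluating it on a denser grid `x_dense` performs the interpolation directly, which is the property the abstract exploits.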
We have addressed the seismic data denoising problem in which the noise is random and has an unknown, spatiotemporally varying variance. In seismic data processing, random noise is often attenuated using transform-based methods. The success of these methods in denoising depends on the ability of the transform to describe the signal features in the data efficiently. Fixed transforms (e.g., wavelets, curvelets) do not adapt to the data and might fail to describe complex morphologies in the seismic data efficiently. Alternatively, dictionary learning methods adapt to the local morphology of the data and provide state-of-the-art denoising results. However, conventional denoising by dictionary learning requires a priori information on the noise variance, and it encounters difficulties when applied to seismic data in which the noise variance varies in space or time. We have developed a coherence-constrained dictionary learning (CDL) method for denoising that does not require any a priori information about the signal or the noise. To denoise a given window of a seismic section using CDL, overlapping small 2D patches are extracted and a dictionary of patch-sized signals is trained to learn the elementary features embedded in the seismic signal. For each patch, a sparse optimization problem is solved using the learned dictionary, and a sparse approximation of the patch is computed to attenuate the random noise. Unlike conventional dictionary learning, the sparsity of the approximation is constrained based on coherence, so it needs neither a priori noise variance nor signal-sparsity information and is still optimal for filtering out Gaussian random noise. The denoising performance of the CDL method is validated using synthetic and field data examples and compared with K-SVD and FX-Decon denoising. We found that CDL gives better denoising results than K-SVD and FX-Decon for removing noise when the variance varies in space or time.
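The key departure from conventional sparse coding is the stopping rule. As a rough sketch (not the paper's exact criterion), one can run orthogonal matching pursuit on each patch and stop when the residual's maximum normalized correlation with the dictionary drops to the coherence level that pure Gaussian noise would attain; the threshold `alpha = sqrt(2 log K / n)` below is an assumed choice.

```python
import numpy as np

def coherence_omp(D, y, alpha=None):
    """Orthogonal matching pursuit with a coherence-based stopping rule.

    D : (n, K) dictionary with unit-norm columns (assumed).
    y : (n,) noisy patch, flattened.
    Stops when no atom correlates with the residual more strongly than
    i.i.d. Gaussian noise is expected to, so no noise variance is needed.
    """
    n, K = D.shape
    if alpha is None:
        alpha = np.sqrt(2.0 * np.log(K) / n)   # assumed noise-coherence level
    r = y.astype(float).copy()
    support, coef = [], np.empty(0)
    while np.linalg.norm(r) > 0 and len(support) < n:
        corr = D.T @ r / np.linalg.norm(r)     # normalized correlations
        k = int(np.argmax(np.abs(corr)))
        if np.abs(corr[k]) < alpha:
            break                              # residual looks like pure noise
        support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef
    x = np.zeros(K)
    x[support] = coef
    return x
```

Because the stopping level depends only on the dictionary size and patch dimension, the same rule applies unchanged as the noise variance drifts across the section, which is the practical advantage the abstract claims over variance-thresholded sparse coding.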
It is well known that experimental or numerical backpropagation of waves generated by a point source or point scatterer will refocus on a diffraction-limited spot no smaller than half the wavelength. More recently, however, super-resolution techniques have been introduced that can apparently overcome this fundamental physical limit. This paper provides a framework for understanding and analysing both diffraction-limited imaging and super-resolution. The resolution analysis presented in the first part of this paper unifies the different ideas of backpropagation and resolution known from the literature and provides an improved platform for understanding the cause of diffraction-limited imaging. It is demonstrated that the monochromatic resolution function consists of both causal and non-causal parts even for ideal acquisition geometries. This is caused by the inherent properties of backpropagation, which does not include the evanescent field contributions. As a consequence, only a diffraction-limited focus can be obtained unless there are ideal acquisition surfaces and an infinite source-frequency band. In the literature, various attempts have been made to obtain images resolved beyond the classical diffraction limit, i.e., super-resolution. The main direction of research has been to exploit the evanescent field components. However, this approach is not practical for seismic imaging in general, since the evanescent waves are so weak, because of attenuation, that they are masked by the noise. Alternatively, improvement of the image resolution of point-like targets beyond the diffraction limit can apparently be obtained by employing concepts adapted from conventional statistical multiple signal classification (MUSIC). The basis of this approach is the decomposition of the measurements into two orthogonal domains: the signal and noise (null) spaces. In comparison with Kirchhoff prestack migration, this technique is shown to give superior results for monochromatic data. However, in the case of random noise, the super-resolution power breaks down when employing monochromatic data and a limited acquisition aperture. For such cases, it also appears that, when the source-receiver layout is less correlated, the use of a frequency band may restore the super-resolution capability of the method.
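As an illustration of the signal/noise-space decomposition behind the MUSIC step, the sketch below builds a monochromatic time-reversal MUSIC pseudospectrum from a multistatic response matrix; the `greens` steering-vector helper and the known scatterer count are assumptions made for the example, not details from the paper.

```python
import numpy as np

def music_image(K_matrix, greens, n_scatterers):
    """Monochromatic time-reversal MUSIC pseudospectrum.

    K_matrix : (N, N) multistatic response matrix (receivers x sources).
    greens   : assumed helper mapping a trial point r to the length-N
               Green's-function (steering) vector for the medium.
    Returns a function giving the pseudospectrum value at a trial point.
    """
    U, _, _ = np.linalg.svd(K_matrix)
    noise_space = U[:, n_scatterers:]   # orthogonal complement of the signal space

    def pseudospectrum(r):
        g = greens(r)
        g = g / np.linalg.norm(g)
        proj = noise_space.conj().T @ g    # component of g in the noise space
        # the projection vanishes at true scatterer locations,
        # so the reciprocal peaks there, far beyond the diffraction limit
        return 1.0 / (np.vdot(proj, proj).real + 1e-12)

    return pseudospectrum
```

The sharpness of the peaks comes from the orthogonality between the steering vectors of true scatterers and the noise space; random noise and limited aperture blur that orthogonality, which is consistent with the breakdown the abstract reports for monochromatic data.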
We developed a higher-resolution method for the estimation of the three traveltime parameters that are used in the 2D zero-offset common-reflection-surface (CRS) stack method. The underlying principle of this method is to replace the coherency measure performed using semblance with the MUSIC (multiple signal classification) pseudospectrum, which utilizes the eigenstructure of the data covariance matrix. The performance of the two parameter estimation techniques (i.e., semblance and MUSIC) was investigated using both synthetic seismic diffraction and reflection data corrupted with white Gaussian noise, as well as a multioffset ground-penetrating radar (GPR) field data set. The parameters estimated employing MUSIC were shown to be superior to those from semblance.
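A minimal sketch of the two coherency measures being compared is given below, assuming each trial parameter triple has already been used to moveout-correct a small data window (`panel`); the flat all-ones steering vector and the single-signal subspace split are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def semblance(panel):
    """Classical semblance of a moveout-corrected window.

    panel : (n_t, n_traces) window after correcting with one trial
            CRS parameter triple (assumed done by the caller).
    """
    num = np.sum(np.sum(panel, axis=1) ** 2)
    den = panel.shape[1] * np.sum(panel ** 2)
    return num / (den + 1e-12)

def music_coherence(panel, n_signal=1):
    """MUSIC pseudospectrum value for the same corrected window.

    Eigendecomposes the trace-to-trace covariance matrix; a perfectly
    flattened event aligns with the all-ones steering vector, which is
    then nearly orthogonal to the noise subspace, so the reciprocal of
    the projection gives a much sharper peak than semblance.
    """
    n_t, n_traces = panel.shape
    R = panel.T @ panel / n_t                  # data covariance matrix
    _, V = np.linalg.eigh(R)                   # eigenvalues in ascending order
    noise_space = V[:, : n_traces - n_signal]  # smallest-eigenvalue vectors
    a = np.ones(n_traces) / np.sqrt(n_traces)  # flat steering vector
    proj = noise_space.T @ a
    return 1.0 / (proj @ proj + 1e-12)
```

Scanning either measure over the three CRS parameters yields a coherency volume whose maxima give the parameter estimates; the MUSIC version trades semblance's bounded, smooth response for a pseudospectrum with much narrower peaks, which is the resolution gain the abstract reports.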