Generally, when velocity filters are applied to prestack seismic data, they can suppress long‐period multiples, but they can also degrade primary amplitudes. Since the apparent velocity differences between primaries and multiples can be negligible within the near‐offset regions of data gathers, a velocity filter is ineffective in these regions and tends to remove or distort the primary signal. The usual remedy is to mute or remove near‐offset data, but this approach is detrimental to low‐fold data. In this paper, we derive a filter that addresses the problem by responding to average apparent velocity differences rather than to instantaneous apparent velocity differences between primaries and multiples. The filter, called a local coherence filter, is essentially a finite‐difference operator whose step size serves as a spatial prediction distance. Within the local coherence filter's small application window, an NMO-corrected multiple is predictable while an overcorrected primary is not. The step size of its difference operator gives the filter the ability to discriminate between primaries and long‐period multiples in regions of a data gather where velocity filters fail. This paper derives the local coherence filter and compares it to the f-k filter in both model and real‐data applications. Results demonstrate that the local coherence filter is more effective in suppressing long‐period multiples without distorting the primary reflections.
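The abstract describes the filter only at this conceptual level. As a rough illustration of the idea, the sketch below applies a finite‐difference operator across the traces of an NMO‐corrected gather, with the trace step playing the role of the spatial prediction distance; the function name, parameters, and edge handling are assumptions of this sketch, not the paper's actual operator.

```python
import numpy as np

def local_coherence_filter(gather, step=3):
    """Illustrative cross-trace finite-difference operator (a sketch).

    gather : 2-D array, shape (n_samples, n_traces), assumed NMO-corrected
             with the multiple velocity so multiples are locally flat.
    step   : trace separation of the difference operator, standing in for
             the spatial prediction distance (hypothetical parameter).
    """
    out = np.zeros_like(gather)
    # A flat multiple is predictable from a trace `step` traces away,
    # so the difference cancels it; an overcorrected primary retains
    # residual moveout within the window and survives the difference.
    out[:, step:] = gather[:, step:] - gather[:, :-step]
    # Edge handling is a simplification: the first `step` traces are
    # passed through unfiltered.
    out[:, :step] = gather[:, :step]
    return out
```

The design intuition follows the abstract: over the small prediction distance, the operator responds to the average apparent velocity difference between the flat multiple and the residually curved primary, rather than to instantaneous dips.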
The least-squares method is the most common way to fit a polynomial curve to data; it minimizes the total squared error between the polynomial model and the data. In this paper, we develop a different approach that exploits the autocorrelation function. In particular, we use the nonzero-lag autocorrelation terms to produce a system of quadratic equations that is solved together with a linear equation derived from summing the data. There is a maximum of 2^n solutions when the polynomial is of degree n; for the linear case, there are generally two solutions. Each solution is consistent with a total error of zero, so either visual examination or measurement of the total squared error is required to determine which solution best fits the data. A comparison between the best-fitting autocorrelation solution and the linear least-squares solution shows negligible difference.
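The abstract does not state the equations explicitly. The sketch below encodes one plausible reading of the linear (degree-1) case: the sum of the residuals is forced to zero (the linear equation) and the lag-1 autocorrelation of the residuals is forced to zero (the quadratic equation), which generally yields two candidate lines. The function name and the exact choice of conditions are assumptions of this sketch.

```python
import numpy as np

def autocorr_line_fit(x, d):
    """Fit d ~ a + b*x from autocorrelation conditions (assumed reading):
       (1) sum(r_i) = 0            linear equation from summing the data
       (2) sum(r_i * r_{i+1}) = 0  lag-1 autocorrelation term set to zero
    where r_i = d_i - (a + b*x_i)."""
    # Enforce (1) by eliminating a:  a = mean(d) - b*mean(x)
    u = d - d.mean()                       # demeaned data
    v = x - x.mean()                       # demeaned abscissa
    # Then r_i = u_i - b*v_i, and (2) becomes A*b^2 - B*b + C = 0
    A = np.sum(v[:-1] * v[1:])
    B = np.sum(u[:-1] * v[1:] + v[:-1] * u[1:])
    C = np.sum(u[:-1] * u[1:])
    roots = np.roots([A, -B, C])           # generally two solutions
    # Noise can produce a complex-conjugate pair; taking real parts is a
    # simplification of this sketch, not part of the paper's method.
    fits = [(b, d.mean() - b * x.mean()) for b in roots.real]
    # As in the abstract: the total squared error selects the solution.
    return min(fits, key=lambda ab: np.sum((d - ab[1] - ab[0] * x) ** 2))

x = np.linspace(0.0, 1.0, 50)
d = 2.0 + 3.0 * x + 0.1 * np.random.default_rng(0).standard_normal(50)
b, a = autocorr_line_fit(x, d)
b_ls, a_ls = np.polyfit(x, d, 1)           # least-squares reference
print(a, b, a_ls, b_ls)
```

With two unknowns, one quadratic and one linear equation admit up to two solutions (Bézout's theorem), matching the abstract's count for the linear case; the selected solution should differ only negligibly from the np.polyfit reference.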
Predictive deconvolution is a very effective multiple attenuator for zero‐offset data and for nonzero‐offset data acquired in water depths of less than 100 m. However, predictive deconvolution's efficacy degrades rapidly with offset, a degradation that correlates strongly with nonstationarity of the primary‐to‐multiple traveltime separation. For model data, predictive deconvolution's performance degrades by a factor of two when the multiple period changes by only 5 ms (20% of the seismic wavelet's dominant period) within the deconvolution gate. For two‐thirds of the model‐data offsets, the change in primary‐multiple separation on each trace exceeds 40% of the dominant period, and deconvolution is completely ineffective at removing multiples. We develop a stationarity transform: a moveout operation, or time‐variable time shift, applied separately to each trace. The stationarity transform stabilizes the traveltime separation between the primary and the first‐order multiple, based on the assumptions of hyperbolic moveout, layer‐cake geology, and Dix multiple velocities. After the stationarity transform is applied to a model data set consisting of primaries and first‐order multiples only, predictive deconvolution suppresses multiples at the theoretical suppression limit for all offsets, and it is equally effective for low‐frequency and high‐frequency wavelets. When the data set is made more realistic by including higher‐order multiples, predictive deconvolution's ability to suppress multiple reflections degrades only slightly with offset. Stationarity transformation also improves predictive deconvolution's multiple suppression on a real data set. Because the real data come from a region where the water depth is less than 100 m, predictive deconvolution suppresses multiples effectively on the near‐ and middle‐offset traces even without stationarity transformation; on the farthest offsets, however, stationarity transformation improves its efficacy significantly.
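As a concrete illustration of such a time‐variable time shift, the sketch below remaps the time axis of a single trace so that a primary and a first‐order multiple, both assumed hyperbolic with known zero‐offset times and moveout velocities, are separated by their zero‐offset period at the given offset. The function name, parameters, and the piecewise‐linear mapping are assumptions of this sketch; the paper's transform is derived from the layer‐cake and Dix‐velocity assumptions rather than from two picked events.

```python
import numpy as np

def stationarity_transform(trace, dt, offset, t0_p, v_p, t0_m, v_m):
    """Illustrative per-trace time-variable time shift (a sketch).

    trace        : 1-D array of samples at interval dt (seconds)
    offset       : source-receiver offset (same distance units as v_p, v_m)
    t0_p, t0_m   : zero-offset times of primary and first-order multiple
    v_p, v_m     : their moveout (e.g., Dix-derived) velocities
    """
    n = len(trace)
    t_out = np.arange(n) * dt
    # Hyperbolic arrival times at this offset
    t_p = np.sqrt(t0_p**2 + (offset / v_p) ** 2)
    t_m = np.sqrt(t0_m**2 + (offset / v_m) ** 2)
    # Output control times: keep the primary fixed and place the multiple
    # one zero-offset period (t0_m - t0_p) after it, so the separation is
    # stationary with offset.
    ctrl_out = np.array([0.0, t_p, t_p + (t0_m - t0_p), t_out[-1]])
    ctrl_in = np.array([0.0, t_p, t_m, t_out[-1]])
    # Piecewise-linear map from output time to input time, then resample.
    t_in = np.interp(t_out, ctrl_out, ctrl_in)
    return np.interp(t_in, t_out, trace)
```

After this remapping, the primary‐to‐multiple separation no longer varies across the deconvolution gate, which is the stationarity condition under which predictive deconvolution approaches its theoretical suppression limit.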