In this paper we propose novel methods for completion (from limited samples) and de-noising of multilinear (tensor) data, and as an application we consider completion and de-noising of 3-D and 4-D (color) video data. We exploit the recently proposed tensor-Singular Value Decomposition (t-SVD) [11]. Based on the t-SVD, the notion of multilinear rank and a related tensor nuclear norm were proposed in [11] to characterize the informational and structural complexity of multilinear data. We first show that videos with linear camera motion can be represented more efficiently using the t-SVD than with approaches based on vectorizing or flattening the tensors. Since efficiency in representation implies efficiency in recovery, we outline a tensor nuclear norm penalized algorithm for video completion from missing entries, and show that it yields superior performance over existing methods. We also consider the problem of tensor robust Principal Component Analysis (PCA) for de-noising 3-D video data corrupted by sparse random errors, and show superior performance of our method compared to the matrix robust PCA of [4] adapted to this setting.
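To make the t-SVD machinery behind this abstract concrete, the sketch below soft-thresholds the singular values of each frontal slice in the Fourier domain (the proximal operator of the tensor nuclear norm) and alternates that step with re-imposition of the observed entries. This is a minimal NumPy sketch of the general technique, not the authors' exact algorithm; the names `tsvt` and `complete` and the fixed threshold `tau` are illustrative assumptions.

```python
import numpy as np

def tsvt(X, tau):
    """Tubal singular value thresholding: soft-threshold the singular
    values of every frontal slice in the Fourier domain (FFT along mode 3).
    This is the proximal operator of the tensor nuclear norm."""
    Xf = np.fft.fft(X, axis=2)
    Yf = np.empty_like(Xf)
    for k in range(X.shape[2]):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)            # soft-threshold
        Yf[:, :, k] = (U * s) @ Vh
    return np.real(np.fft.ifft(Yf, axis=2))

def complete(M, mask, tau=1.0, n_iter=200):
    """Fill missing entries of a 3-D tensor M (mask == True where observed)
    by alternating the TNN proximal step with a data-consistency projection.
    A simple hard-constraint iteration, assumed here for illustration."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        X = tsvt(X, tau)
        X[mask] = M[mask]                       # enforce observed entries
    return X
```

Alternating the thresholding step with projection onto the observed entries is the simplest variant of nuclear-norm-penalized completion; in practice the threshold would be tuned or decreased over iterations.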
We have developed a novel strategy for simultaneous interpolation and denoising of prestack seismic data. Most seismic surveys fail to cover all possible source-receiver combinations, leading to missing data, especially in the midpoint-offset domain. This undersampling can complicate data processing steps such as amplitude-variation-with-offset analysis and migration; data interpolation can mitigate the impact of the missing traces. We treated the prestack data as a 5D multidimensional array, also referred to as a 5D tensor. Using synthetic data sets, we first found that prestack data can be well approximated by a low-rank tensor under a recently proposed framework for tensor singular value decomposition (tSVD). Under this low-rank assumption, we proposed a complexity-penalized algorithm for the recovery of missing traces and data denoising, in which the complexity regularization was controlled by tuning a single regularization parameter using a statistical test. We tested the proposed algorithm on synthetic and real data and showed that missing data can be reliably recovered under heavy downsampling. In addition, we demonstrated that the compressibility of seismic data under the tSVD, i.e., how well the data are approximated by a low-rank tensor, depended on the velocity model complexity and the shot and receiver spacing. We further found that compressibility correlated with the recovery of missing data: high compressibility implied good recovery, and vice versa.
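The compressibility claim can be made concrete by measuring how quickly the reconstruction error decays when the tensor is truncated to a small tubal rank. The following minimal sketch does this for a third-order tensor (the paper itself works with 5D prestack volumes); `tsvd_truncate` and `compressibility` are hypothetical helper names, not functions from the paper.

```python
import numpy as np

def tsvd_truncate(X, r):
    """Tubal-rank-r approximation: keep only the r largest singular
    values of each frontal slice in the Fourier domain."""
    Xf = np.fft.fft(X, axis=2)
    Yf = np.empty_like(Xf)
    for k in range(X.shape[2]):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        Yf[:, :, k] = (U[:, :r] * s[:r]) @ Vh[:r, :]
    return np.real(np.fft.ifft(Yf, axis=2))

def compressibility(X, r):
    """Relative reconstruction error at tubal rank r; small values mean
    the data are highly compressible under the tSVD."""
    return np.linalg.norm(X - tsvd_truncate(X, r)) / np.linalg.norm(X)
```

Sweeping `r` and plotting `compressibility(X, r)` for data generated under different velocity models and acquisition geometries is one way to reproduce the kind of dependence the abstract describes.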
Seismic imaging is conventionally performed using noisy data and a presumably inexact velocity model. Uncertainties in the input parameters propagate directly into the final image and therefore into any quantity of interest, or qualitative interpretation, obtained from it. We considered the problem of uncertainty quantification in velocity model building and seismic imaging using Bayesian inference. Using a reduced velocity model, a fast field expansion method for simulating recorded wavefields, and the adaptive Metropolis-Hastings algorithm, we efficiently quantify velocity model uncertainty by generating multiple models consistent with low-frequency full-waveform data. A second application of Bayesian inversion, to the seismic reflections present in the recorded data, reconstructs the corresponding structures' positions along with their associated uncertainties. Our analysis complements rather than replaces traditional imaging: it allows us to assess the reliability of visible image features and to take that reliability into account in subsequent interpretations.
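A minimal sketch of the adaptive Metropolis-Hastings step described here, in the style of Haario et al.'s adaptive Metropolis, is given below. The `log_post` argument stands in for the log-posterior that the paper evaluates with its field expansion wavefield simulator; the function name, adaptation schedule, and scaling constants are illustrative assumptions.

```python
import numpy as np

def adaptive_mh(log_post, m0, n_steps=5000, adapt_start=500, eps=1e-8):
    """Adaptive Metropolis: the Gaussian proposal covariance is learned
    from the chain's own history. log_post is the unnormalized
    log-posterior over the reduced velocity-model parameters."""
    rng = np.random.default_rng(0)
    d = len(m0)
    sd = 2.38**2 / d                      # standard AM scaling factor
    chain = np.empty((n_steps, d))
    m, lp = np.asarray(m0, float), log_post(m0)
    cov = np.eye(d)
    for t in range(n_steps):
        if t > adapt_start:               # adapt from past samples
            cov = sd * (np.cov(chain[:t].T) + eps * np.eye(d))
        prop = rng.multivariate_normal(m, cov)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            m, lp = prop, lp_prop
        chain[t] = m
    return chain
```

The returned chain, after discarding burn-in, gives the ensemble of velocity models consistent with the data from which uncertainty estimates are formed.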
Time-lapse seismic monitoring using full-wavefield methods aims to accurately and robustly image rock and fluid changes within a reservoir. These changes are typically small and localized. Quantifying the uncertainty in these changes is crucial for decision making, but traditional pixel-by-pixel uncertainty quantification over large models is computationally infeasible. We exploit the structure of the time-lapse seismic problem for fast wavefield computations using a numerically exact local acoustic solver, which allows us to perform Bayesian inversion with a Metropolis-Hastings algorithm that samples our posterior distribution. We address the well-known dimensionality problem in global optimization using an image compression technique. We run our numerical experiments using a single shot and a single frequency; however, we show that different frequencies converge to different local minima. In addition, we test our framework for both uncorrelated and correlated noise and retrieve different histograms for each noise type. Through our numerical examples we show the importance of defining quantities of interest when setting up an uncertainty quantification framework, including choosing the number of degrees of freedom and the model parametrization that best approximate the problem. To our knowledge, no prior work in the literature has studied the time-lapse problem using stochastic full-waveform inversion.
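The distinction between uncorrelated and correlated noise enters the sampler through the covariance matrix of the Gaussian data likelihood. The sketch below evaluates that log-likelihood via a Cholesky factorization; the exponential covariance model and the parameter values are illustrative assumptions, not the paper's choices.

```python
import numpy as np

def gaussian_loglike(residual, C):
    """Log-likelihood of the data residual d_obs - d_sim under Gaussian
    noise with covariance C; C = sigma**2 * I is the uncorrelated case."""
    L = np.linalg.cholesky(C)
    w = np.linalg.solve(L, residual)      # whitened residual
    return (-0.5 * (w @ w)
            - np.log(np.diag(L)).sum()    # -0.5 * log det C
            - 0.5 * len(residual) * np.log(2 * np.pi))

# Uncorrelated vs. exponentially correlated noise on n data samples
# (sigma and corr_len are made-up values for illustration):
n, sigma, corr_len = 200, 0.05, 10.0
i = np.arange(n)
C_uncorr = sigma**2 * np.eye(n)
C_corr = sigma**2 * np.exp(-np.abs(i[:, None] - i[None, :]) / corr_len)
```

Running the same Metropolis-Hastings chain under `C_uncorr` and `C_corr` changes the acceptance behavior and hence the posterior histograms, which is consistent with the different histograms the abstract reports for the two noise types.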