Many production targets in greenfield exploration are found in salt provinces, where salt movement over geologic time has created highly complex structures. Complex geology, steep dips, and strong wave-propagation effects make reverse-time migration (RTM) the migration method of choice over Kirchhoff migration and other (by definition approximate) one-way wave-equation methods. Imaging the subsurface with any depth-migration algorithm succeeds only when the prior velocity model is of sufficient quality. The velocity model-building loop is an iterative procedure for improving that model: residual-moveout measurements are made on image gathers generated during migration, and those measurements are then input to a tomographic update. RTM is commonly applied around salt bodies, which is precisely where velocity model building tends to fail, essentially because tomography is ray-trace based. Our idea is to apply RTM directly inside the model-building loop, but without using the image gathers. Although the process is costly, we migrate the full frequency content of the data to create a high-quality stack. This significantly enhances the interpretation of top and bottom salt and enables us to properly include the resulting salt geometry in the velocity model. We demonstrate the idea on a 2D West Africa seismic line. After several model-building iterations, the result is a dramatically improved velocity model. With such a good model as input, the final RTM confirms the salt-body geometry and, in essence, the salt interpretation, and yields a compelling image of the subsurface.
As exploration and production move ever faster into ever more geologically complex areas, highly accurate imaging of subsurface structure becomes critical to successful well positioning and reserve estimation. Currently, the only tool that can achieve this is prestack depth migration (PSDM).

The continuing improvement of personal computers, now capable of high-performance computing thanks to the introduction of Linux-based clusters, has led to PC-based processing systems that are widely accepted in the seismic industry. We foresee that this new source of ubiquitous, cheap compute power will be applied to depth imaging of increasingly large data sets. This is an important development, but are there other ways to make good use of ample processing power?

In this article, we suggest that new ideas can be applied at various stages of the prestack depth migration loop to improve overall quality and, perhaps more significantly, to reduce the need for human intervention to a bare minimum. We believe that automated velocity model building and updating could be the key to improving the depth migration process, because more iterations could be run in less time and, it is assumed, more iterations will result in better convergence of the model with the earth.

The Sakhalin data example in this article incorporates several of these ideas in a commonly used prestack depth migration cycle. Figure 1, the PSDM cycle, indicates the areas where we have developed new ways of working. The initial velocity model is a grid representing the seismic velocities and is used to perform a prestack depth migration. This step requires that the migration operators and the prestack seismic data be fed into the Kirchhoff summation migration engine (Berkhout, 1984; Schleicher et al., 1993), which produces image gathers. The residual moveout on the common image gathers is used as a quality-control measure of the velocity model. To use this information in an automated fashion, the gathers have to be scanned for time dip and residual moveout.

The right side of Figure 1 (cyan and gold boxes) contains our new approach to computing a model update. The common image gathers are not scanned directly but are first filtered using anisotropic diffusion. In the second step, these filtered gathers are scanned multiple times by our new autopicking routine. The equation builder creates distributed Jacobian matrix and vector elements on a Linux cluster. These are then collected by index and solved in parallel using subroutines from the Portable, Extensible Toolkit for Scientific Computation (PETSc) (Balay et al., 2001). The update vector is then used to refine the velocity model, and the whole loop is repeated until a satisfactory residual moveout is achieved.

Velocity model and updating. The velocity function is input to the ray-trace and traveltime calculations needed for migration, and many types of models may be considered to represent this velocity function. We chose gridded models because grids have the imp...
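The gather-filtering step above can be made concrete with a small sketch. Below is a Perona-Malik-style filter, one common form of anisotropic diffusion; the article does not specify its exact scheme, so the function and parameter names here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: edge-preserving (Perona-Malik) diffusion of a common
# image gather, standing in for the anisotropic diffusion filter described
# in the text. All names and parameter values are illustrative.
import numpy as np

def anisotropic_diffusion(gather, n_iter=20, kappa=0.1, dt=0.2):
    """Smooth a 2D image gather (depth x offset) while preserving event edges.

    kappa sets the gradient magnitude treated as an "edge"; it should be
    scaled to the amplitude range of the input gather.
    """
    u = gather.astype(float).copy()
    for _ in range(n_iter):
        # forward differences along depth and offset
        gz = np.diff(u, axis=0, append=u[-1:, :])
        gx = np.diff(u, axis=1, append=u[:, -1:])
        # Perona-Malik conductivity: small across strong edges (events),
        # large in smooth regions, so noise diffuses but events survive
        cz = np.exp(-(gz / kappa) ** 2)
        cx = np.exp(-(gx / kappa) ** 2)
        # divergence of the edge-weighted gradient field (backward differences)
        flux_z = cz * gz
        flux_x = cx * gx
        div = (np.diff(flux_z, axis=0, prepend=flux_z[:1, :]) +
               np.diff(flux_x, axis=1, prepend=flux_x[:, :1]))
        u += dt * div
    return u
```

Because the conductivity shrinks near strong gradients, coherent events are preserved while incoherent noise is smoothed away, which is what makes the subsequent automated scan more robust than picking on raw gathers.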
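The autopicking scan can be sketched in the same spirit. Assuming a parabolic residual-moveout model z(h) = z0 + gamma*h^2 on a depth gather, which is a common choice but not stated in the article, a coherency scan over trial curvatures might look like this:

```python
# Hypothetical sketch of a residual-moveout scan: for each depth sample of a
# filtered common image gather, stack along trial moveout curves and keep the
# curvature that maximizes semblance. All names are illustrative.
import numpy as np

def scan_residual_moveout(gather, dz, offsets, gammas):
    """gather: 2D array (depth x offset); returns the best gamma per depth."""
    nz, nh = gather.shape
    best = np.zeros(nz)
    for iz in range(nz):
        z0 = iz * dz
        score = np.empty(len(gammas))
        for ig, g in enumerate(gammas):
            # sample index of the trial curve z(h) = z0 + g*h^2 at each offset
            iz_h = np.rint((z0 + g * offsets**2) / dz).astype(int)
            ok = (iz_h >= 0) & (iz_h < nz)
            amps = gather[iz_h[ok], np.nonzero(ok)[0]]
            # semblance: stacked energy over total energy along the curve
            score[ig] = amps.sum()**2 / (len(amps) * (amps**2).sum() + 1e-12)
        best[iz] = gammas[np.argmax(score)]
    return best
```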
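Finally, the tomographic solve. The article assembles distributed Jacobian elements and solves them in parallel with PETSc; as a serial stand-in, a damped sparse least-squares solve conveys the same step. The Jacobian J maps a gridded slowness perturbation dm to predicted residual moveouts, and r holds the autopicked residuals; all names here are illustrative, not the authors' code.

```python
# Hypothetical sketch of the update solve, with scipy's sparse LSQR standing
# in for the distributed PETSc solver used in the article.
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

def solve_update(rows, cols, vals, n_eq, n_cells, residuals, damp=1.0):
    """Damped least squares: minimize ||J dm - r||^2 + damp^2 ||dm||^2."""
    J = csr_matrix((vals, (rows, cols)), shape=(n_eq, n_cells))
    dm = lsqr(J, residuals, damp=damp)[0]
    return dm  # gridded slowness update, applied to the model for the next pass
```

Repeating migrate, filter, scan, and solve until the residual moveout flattens is the loop the article automates.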