Accurate well test data acquired throughout an appropriately designed test program is critical to confidently characterizing a reservoir. Achieving this requires the right DST string selection or completion and stimulation designs, the right surface test setup and facilities, and the ability to respond rapidly to dynamic changes in flow regimes. Well testing is inherently complex because of the interaction between these elements. "Successful failures" in well testing are unfortunately not uncommon; they result when each element of a test performs to standard but focus on the ultimate objectives is lost. Remote participation in operations by the experts who designed the test is becoming increasingly important in achieving test objectives, particularly in geologically complex structures, low-deliverability formations, reservoirs with high-flow-rate wells, or environmentally challenging conditions.

Technological developments have enabled improved monitoring and control of advanced well testing equipment, often now with multiple data-acquisition systems. For increased accuracy in highly dynamic test environments, well tests are also performed using multiphase flow metering alongside separators specifically designed to increase handling and separation efficiencies, each with separate acquisition systems. Remotely located experts are now able to validate and evaluate data in real time, 24/7, with the flexibility to change acquisition and test programs to ensure that objectives are achieved.

Real-time access and remote connectivity were provided during gas condensate testing of three North Sea wells in the Grove field. This paper describes the value of real-time services where the test program required continuous data quality assurance and rapid real-time evaluation onshore.
This paper also demonstrates how real-time collaboration of personnel at multiple sites was crucial in carrying out a successful pressure transient analysis and a complete interpretation, both of which helped achieve the test program objectives within the planned test duration.
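For context, the workhorse relation behind a drawdown pressure transient analysis is the line-source solution for infinite-acting radial flow; its late-time semilog approximation can be sketched in a few lines. This is a generic illustration in SI units with made-up inputs, not the analysis performed on the Grove wells:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def drawdown_semilog(q, B, mu, k, h, phi, ct, rw, t):
    """Late-time (semilog) approximation to the line-source drawdown, SI units.

    q  : surface rate, m^3/s        B  : formation volume factor
    mu : viscosity, Pa.s            k  : permeability, m^2
    h  : net pay, m                 phi: porosity, fraction
    ct : total compressibility, 1/Pa
    rw : wellbore radius, m         t  : elapsed time, s
    Returns the pressure drop p_i - p_wf in Pa.
    """
    x = phi * mu * ct * rw**2 / (4.0 * k * t)  # dimensionless group; approximation valid for x << 1
    return q * B * mu / (4.0 * math.pi * k * h) * (-EULER_GAMMA - math.log(x))
```

The key diagnostic property is that each doubling of time adds the same increment of drawdown, m·ln 2 with slope m = qBμ/(4πkh), which is what an interpreter looks for on a semilog plot.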
The Chestnut field is located in Block 22/2a in the central North Sea. The field, in water depths of up to 120 m, lies approximately 180 km east of Aberdeen, Scotland. Chestnut was first commercially produced in September 2008 by Centrica Energy (formerly Venture Petroleum) using two subsea wells (a horizontal oil producer and a water injection well) tied into a floating production, storage, and offloading (FPSO) vessel. Water injection was required almost immediately because the oil was saturated. A second oil producing well was spudded in September 2008, targeting the South Chestnut field. This well, 22/2a-16Y, was tied into the same flowline and riser as the existing oil producer. A venturi-type downhole flowmeter was installed in well 22/2a-16Y to obtain continuous pressure, temperature, and flow rate measurements. The production from the other well could then be calculated by subtracting the venturi flowmeter measurements from the total rate measurements made at the FPSO.

Venturi-type downhole flowmeters are, strictly speaking, applicable only in liquid environments because the Bernoulli principle is valid only for single-phase flow and is tenable only in low-slip liquid-liquid flow regimes, such as the concurrent flow of oil and water at high velocities. Because the Chestnut oil is saturated, it was known that free gas would be seen at the intake of the venturi because the flowing pressure would, by definition, be below the bubblepoint. To address the challenges caused by two-phase flow through the flowmeter, a workflow was developed that would first assess the quantity and effect of the free gas in the venturi device. The workflow was then extended to increase the accuracy of the flowmeter in two-phase oil/gas flowing conditions. The enhanced flow calculations were then validated against FPSO test separator data gathered when only the flowmeter-equipped well was producing.
The enhanced model improved the accuracy of the liquid-rate predictions across various rates from initial discrepancies of 40% to 190% down to less than 5%, allowing Centrica Energy to achieve its well- and reservoir-monitoring objectives. The use of venturi-type flowmeters has traditionally been limited to applications in which only liquid flows through the meter. This case study shows that customized workflows can improve the accuracy of venturi flowmeter measurements in multiphase environments, making these downhole flowmeters a cost-effective alternative to true multiphase meters for certain applications.
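The single-phase Bernoulli relation that such venturi meters rely on, together with the rate-by-subtraction step described above, can be sketched as follows. The discharge coefficient and dimensions are illustrative assumptions, not Chestnut values:

```python
import math

def venturi_rate(dp, rho, d_throat, d_pipe, cd=0.98):
    """Single-phase volumetric rate (m^3/s) through a venturi, from Bernoulli.

    dp       : differential pressure between inlet and throat, Pa
    rho      : fluid density, kg/m^3 (meaningful only for single-phase flow)
    d_throat : throat diameter, m
    d_pipe   : upstream pipe diameter, m
    cd       : discharge coefficient (assumed value; meter-specific in practice)
    """
    area_throat = math.pi * d_throat**2 / 4.0
    beta = d_throat / d_pipe  # diameter ratio
    return cd * area_throat * math.sqrt(2.0 * dp / (rho * (1.0 - beta**4)))

def other_well_rate(total_fpso_rate, metered_well_rate):
    """Commingled-production allocation by subtraction, as described in the text."""
    return total_fpso_rate - metered_well_rate
```

When free gas is present at the meter intake, the effective density in this formula is no longer well defined, which is precisely the two-phase problem the paper's enhanced workflow corrects for.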
Summary This paper presents a gridding study relating to reservoir simulation of a giant, complex, low-permeability carbonate reservoir developed with 75 ultralong horizontal wells in a densely spaced alternating injector/producer pattern. The lateral magnitude of the Al Shaheen field in Qatar and the radial layout of the multiple ultralong horizontal wells in the field posed a challenge in modeling individual well performance using a manageable grid size with an acceptable run time for history matching. Reservoir modeling was complicated further by the complex reservoir characteristics, with a tilting free-water level (FWL), separate gas caps, large lateral variations in oil properties, and wettability-dependent flow characteristics. These features had to be incorporated into the initialization and dynamic modeling of the reservoir, which added further to the memory requirements of the simulation model. This paper describes the process of selecting a suitable simulation grid for history matching the performance of this reservoir on a full-field basis. Conventional Cartesian gridding techniques, including the use of local grid refinements (LGRs) in areas of interest, were pursued initially but were shown to be inadequate for full-field modeling of this complex reservoir. The gridding problem was solved by the use of 2.5D perpendicular-bisector (PEBI) grids around each of the horizontal wells in the field. This allowed for sufficient resolution between wells and also aligned the grid with the well paths, thereby avoiding grid-nonorthogonality issues. The efficiency of the PEBI model was also demonstrated by a comparison of CPU performance. Run times for the full-field PEBI model were equivalent to those of a conventional Cartesian model with suitable local grids covering only 20% of the wells. Both models had approximately 700,000 active cells and required 3-4 GB of memory.
A full-field model relying on conventional LGRs around all wells was not built because it would involve significantly more grid cells and, therefore, would become considerably slower and require more memory.
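The defining property of a PEBI (Voronoi) grid is that each cell face lies on the perpendicular bisector between two neighboring grid nodes, which keeps faces locally orthogonal to the node connections. In one dimension the construction reduces to taking midpoints, which is enough to illustrate how clustering nodes along a well path yields automatic local refinement without LGRs. This is a conceptual sketch only, not the gridding tool used in the study:

```python
def pebi_faces_1d(nodes):
    """1D PEBI grid: each cell face sits on the perpendicular bisector
    (i.e., the midpoint) between two adjacent grid nodes."""
    nodes = sorted(nodes)
    return [0.5 * (a + b) for a, b in zip(nodes, nodes[1:])]

# Illustrative node spacing: dense near a well at x = 0, coarse far away.
# The resulting cells shrink smoothly toward the well with no hanging
# nodes or refinement boundaries, unlike a Cartesian LGR.
well_centered_nodes = [-50.0, -10.0, -2.0, -0.5, 0.0, 0.5, 2.0, 10.0, 50.0]
faces = pebi_faces_1d(well_centered_nodes)
```

In 2.5D the same bisector property is applied areally around each well path and the resulting polygonal columns are extruded through the layering, which is what allowed one unstructured grid to serve all 75 wells at once.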
This paper describes the application of Project INTERSECT, a next-generation, highly scalable reservoir simulator, to real large-scale field models. High-resolution reservoir simulation is required to better define and describe fluid flow and to enable improved field development and tactical operational planning. Massively parallel computing techniques overcome limitations of problem size and spatial resolution.

This paper demonstrates that large-scale simulation models can be run on commodity hardware by taking advantage of the evolution in multi-CPU hardware architecture and software engineering. This allows both geologists and reservoir engineers to include more realistic geologic and engineering detail for better and more reliable production optimization. Intensive computer simulation is essential for effective reservoir management. Advances in reservoir characterization techniques and the industry drive toward the "smart oilfield," with rapid model updates, will require more efficient model processing to achieve timely field operational decisions. Parallel reservoir simulators have the potential to solve larger, more realistic problems than previously possible. The size and application of reservoir simulation problems have been limited by the availability of computing hardware, reservoir simulator architecture, and solution methods for large-scale heterogeneous problems. The next-generation reservoir simulator demonstrates that key modeling challenges have been overcome by a software architecture with the capability to model more realistic subsurface and surface systems.
Applications of the new reservoir simulator illustrate how typical reservoir engineering options such as local grid refinement, local grid coarsening, multilateral wells, and aquifer modeling affect overall parallel performance and scalability on highly heterogeneous large-scale models. Application of new modeling techniques highlights the increased accuracy of modeling results and more reliable field development planning and reservoir management decisions.

Introduction To generate higher returns on capital employed, the oil and gas industry must follow a two-pronged strategy: reduce the cost of finding and developing new reservoirs while improving production performance for existing reservoirs. The evolution of hardware and software is accelerating in the energy sector, and personnel involved in field development need to keep up with these trends in order to make fit-for-purpose decisions for long-term operational designs and short-term tactical planning. Modern petroleum reservoir simulation requires simulating high-resolution, geologically detailed models. The advent of cluster computing relies on accurate and efficient model-based computing, performed primarily on detailed models representing flow in permeable media. Future production depends on large-scale computational efficiency to enable enhanced reservoir characterization and the adoption of new oil recovery technologies. Over the last twenty years, high-performance computing has had a significant impact on the evolution of numerical predictive methods throughout science and engineering. In particular, petroleum engineering applications have seen a significant enhancement in reservoir simulation capabilities. The complexity of geological and reservoir simulation models has led to computational requirements that have consistently challenged the fastest hardware platforms.

Fig. 1 illustrates the general trend in simulation model grid resolution seen by the oil and gas industry over the last 30 years. The increase in grid resolution is clearly linked to advances in computer hardware technology and the price/performance of the overall hardware platforms. Early hardware platforms were largely based on mainframes that provided efficient processing but enabled only coarse models. The emergence of workstations in the late 1980s not only made computing hardware more accessible to the engineer but also enabled more refined models that more closely resembled geological models. The evolution of workstations toward cluster computing emerged as reservoir characterization and upscaling tools became more advanced and more easily accessible to the engineer. This enabled a step change in grid resolution as existing simulator technologies were migrated to take advantage of parallel processing.
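A simple way to reason about the parallel-scalability claims above is Amdahl's law, which bounds the speedup achievable on n processors by the serial fraction of the workload. The numbers below are illustrative, not INTERSECT benchmarks:

```python
def amdahl_speedup(serial_fraction, n_procs):
    """Amdahl's law: idealized speedup on n_procs processors when a fixed
    fraction of the work (serial_fraction) cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# Even a 5% serial fraction (e.g., I/O or a non-parallel linear-solver setup)
# caps a 64-way run well below the ideal 64x speedup.
ideal = amdahl_speedup(0.0, 64)    # 64.0
capped = amdahl_speedup(0.05, 64)  # roughly 15x
```

This is why a scalable simulator must parallelize not just the flow computation but also the property calculations, well models, and linear solver, since any remaining serial component quickly dominates at high processor counts.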