The sheer size of the 2.7-billion-barrel field and an expected operational life of more than 50 years make Johan Sverdrup an exciting place to develop the solutions of the future. As such, the Johan Sverdrup field development has been called the operator's digital flagship. Being a ‘flagship’ means Johan Sverdrup is not only meant to be a vehicle for digital innovation that improves safety, value creation and carbon efficiency on the field itself; the development is also meant to drive digital solutions and ways of working that can be scaled up for the benefit of the operator as a whole.
This paper starts by setting out the main digital focus areas for the Johan Sverdrup development and then drills down into the areas where Johan Sverdrup is already demonstrating real and tangible impact. Examples include efforts to mature technology for automatic production optimization, a number of new pipe and seabed technology solutions (including fiber-optic PRM and fiber-optic monitoring of wells), and the step-wise development of a digital twin for Johan Sverdrup that gives the opportunity to model and visualize key parts of the field.
The paper also describes the ‘digital field-worker’ at Johan Sverdrup, which is changing ways of working both during the construction and completion phase and during operations. This ranges from efforts to automate and digitalize work processes offshore to more innovative solutions. Examples of the latter include Echo, a multi-player digital twin solution allowing real-time visualization and collaboration between onshore and offshore, as well as the iterative development of anomaly detection models (utilizing machine learning and artificial intelligence) to move from condition-based monitoring towards predictive maintenance.
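The abstract does not detail the anomaly-detection models themselves. As a rough, self-contained sketch of the general idea (learning what normal sensor behaviour looks like and flagging deviations early enough to plan maintenance), the Python snippet below fits an Isolation Forest to synthetic condition-monitoring readings. The feature set (vibration, temperature, current) and all numbers are placeholders, not values from Johan Sverdrup.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Placeholder condition-monitoring features per equipment reading:
    # vibration (mm/s), bearing temperature (deg C), motor current (A)
    rng = np.random.default_rng(0)
    healthy_readings = rng.normal(loc=[2.0, 60.0, 15.0],
                                  scale=[0.2, 2.0, 1.0],
                                  size=(5000, 3))

    # Fit an unsupervised anomaly detector on historical "healthy" data
    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(healthy_readings)

    # Score new readings; -1 flags a potential anomaly to investigate before failure
    new_readings = np.array([[2.1, 61.0, 15.5],    # looks healthy
                             [3.5, 85.0, 25.0]])   # drifting towards failure
    print(detector.predict(new_readings))          # e.g. [ 1 -1]

In practice such a detector would be retrained iteratively as labelled failure data accumulates, which is what allows the shift from condition-based monitoring towards genuinely predictive maintenance.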
Deep learning applications are progressing rapidly in seismic processing and interpretation tasks. However, the majority of approaches subsample data volumes and restrict model sizes to minimise computational requirements. Subsampling the data risks losing vital spatio-temporal information that could aid training, whilst restricting model sizes can impact model performance or, in some extreme cases, render more complicated tasks such as segmentation impossible. This paper illustrates how to tackle the two main issues in training large neural networks: memory limitations and impractically long training times. Typically, training data is preloaded into memory prior to training, a particular challenge for seismic applications where the data is typically four times larger than that used for standard image processing tasks (float32 vs. uint8). Using a microseismic use case, we illustrate how over 750 GB of data can be used to train a model by using a data generator approach which only stores in memory the data required for the current training batch. Furthermore, efficient training of large models is illustrated through the training of a 7-layer UNet with input data dimensions of 4096×4096 (approximately 7.8 M parameters). Through a batch-splitting distributed training approach, training times are reduced by a factor of four. The combination of data generators and distributed training removes any necessity for data subsampling or restriction of neural network sizes, offering the opportunity to utilise larger networks, higher-resolution input data, or to move from 2D to 3D problem spaces.
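The two ideas in this abstract, a data generator that keeps only the current batch in memory rather than preloading the full ~750 GB, and batch-splitting distributed training, can be sketched roughly as below. This is a minimal illustration rather than the authors' code: the .npy file layout, the SeismicBatchGenerator class, and the tiny stand-in model are assumptions, and the batch splitting is shown with TensorFlow's MirroredStrategy as one example of dividing each global batch across the available GPUs.

    import glob
    import numpy as np
    import tensorflow as tf

    class SeismicBatchGenerator(tf.keras.utils.Sequence):
        """Loads one batch from disk per call, so only that batch sits in memory."""

        def __init__(self, image_paths, mask_paths, batch_size=2):
            super().__init__()
            self.image_paths = image_paths
            self.mask_paths = mask_paths
            self.batch_size = batch_size

        def __len__(self):
            # Number of batches per epoch
            return int(np.ceil(len(self.image_paths) / self.batch_size))

        def __getitem__(self, idx):
            lo = idx * self.batch_size
            hi = lo + self.batch_size
            # Assumption: each .npy file holds one 4096 x 4096 float32 patch
            x = np.stack([np.load(p) for p in self.image_paths[lo:hi]])
            y = np.stack([np.load(p) for p in self.mask_paths[lo:hi]])
            return x[..., np.newaxis], y[..., np.newaxis]

    def tiny_segmentation_model(input_shape=(4096, 4096, 1)):
        # Minimal fully convolutional stand-in for the 7-layer UNet in the abstract
        inputs = tf.keras.Input(shape=input_shape)
        x = tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu")(inputs)
        outputs = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)
        return tf.keras.Model(inputs, outputs)

    # Batch-splitting data parallelism: each global batch is split across visible GPUs
    strategy = tf.distribute.MirroredStrategy()
    with strategy.scope():
        model = tiny_segmentation_model()
        model.compile(optimizer="adam", loss="binary_crossentropy")

    train_gen = SeismicBatchGenerator(sorted(glob.glob("data/images/*.npy")),
                                      sorted(glob.glob("data/masks/*.npy")))
    model.fit(train_gen, epochs=10)

With the generator, host-memory use scales with the batch size rather than the dataset size, and the mirrored strategy keeps per-GPU work per step roughly constant as devices are added, which is the mechanism behind the training-time reduction the abstract describes.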