Seismic measurements for the prediction of hazard zones are established practice on many tunnel drives in rock mass today. In addition to a large exploration range and accurate localisation of discontinuities, seismic data provide attributes for a comprehensive characterisation of the ground conditions. Good synchronisation of all technical components is required to obtain optimum data quality and quantity without obstructing the tunnel excavation. Firstly, the signal source must feed as much energy as possible into the rock in a very short time. Secondly, continuous signal generation at constant quality, precisely timed via wireless data transmission, ensures a reliable measurement process. Artificial intelligence is used to assess the quality of the recorded data while still in the tunnel, and feedback is given to the user to keep data quality high. From the tunnel site, recorded raw data can be transferred to a cloud, from where an authorised processor retrieves them anywhere in the world. Data processing starts immediately and delivers a result within an hour, including a geological forecast of up to 150 m ahead of the heading, depending on the rock mass conditions. In addition to data quality, the quality of the results is crucial. Techniques are therefore under development that use machine learning to correlate and analyse seismic attributes against geological properties, which should lead to a more objective evaluation of the geological forecast in the future.
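The abstract does not describe its quality-assessment method in detail, but the idea of checking recorded traces automatically and feeding the result back to the operator can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the sampling rate, the hand-crafted trace features, the synthetic "good" and "poor" traces, and the random-forest classifier are placeholders, not the system referred to in the abstract.

```python
# Illustrative sketch of an automated trace-quality check (assumed design, not the
# system described in the abstract). Features, thresholds and data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

RNG = np.random.default_rng(0)
FS = 4000.0  # assumed sampling rate in Hz

def trace_features(trace: np.ndarray) -> np.ndarray:
    """Simple per-trace attributes: RMS amplitude, crude SNR, dominant frequency."""
    rms = np.sqrt(np.mean(trace ** 2))
    noise = np.std(trace[-len(trace) // 10:])          # trace tail as a noise proxy
    snr = rms / (noise + 1e-12)
    spectrum = np.abs(np.fft.rfft(trace))
    dom_freq = np.fft.rfftfreq(len(trace), d=1.0 / FS)[np.argmax(spectrum)]
    return np.array([rms, snr, dom_freq])

# Synthetic stand-in data: "good" traces carry a damped 150 Hz arrival, "poor" ones
# are mostly noise (e.g. a weak or mistimed shot).
t = np.arange(2048) / FS
good = [np.sin(2 * np.pi * 150 * t) * np.exp(-5 * t) + 0.05 * RNG.standard_normal(t.size)
        for _ in range(200)]
poor = [0.5 * RNG.standard_normal(t.size) for _ in range(200)]
X = np.array([trace_features(tr) for tr in good + poor])
y = np.array([1] * len(good) + [0] * len(poor))        # 1 = acceptable, 0 = repeat shot

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Feedback loop in the tunnel: classify each new trace as it is recorded.
new_trace = np.sin(2 * np.pi * 150 * t) * np.exp(-5 * t) + 0.1 * RNG.standard_normal(t.size)
label = clf.predict(trace_features(new_trace).reshape(1, -1))[0]
print("trace quality:", "acceptable" if label == 1 else "repeat shot")
```

In practice the classifier would be trained on traces labelled by experienced processors and the verdict shown to the crew immediately, so that poor shots can be repeated before the equipment is moved.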
Research has demonstrated that machine learning algorithms (MLAs) are a powerful addition to the rock engineering toolbox, yet they remain a largely untapped resource in engineering practice. The reluctance to adopt MLAs as part of standard practice is often attributed to the ‘opaque’ nature of the algorithms, the complexity of developing them, and the difficulty of determining how the algorithms use the datasets. This article presents tools and processes for developing MLAs, selecting inputs, and balancing data for practical underground rock engineering. MLAs for classification and regression – the two main machine learning applications – are presented in terms of developing an MLA that extracts information from the dataset to obtain the desired output. Engineering verification metrics are selected based on their suitability for the specific output. Methods for input selection and data balancing are discussed with a focus on choosing input data appropriate to the problem without introducing bias or excess complexity. Each tool and process for algorithm development, data preparation, and input selection is illustrated with a case study. This article demonstrates that geotechnical practitioners can extract additional value by applying MLAs to rock engineering problems. Once an understanding of how MLAs function is reached, the building blocks and open‐source code are available to be adapted to suit the rock mass behaviour of interest.
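As a concrete illustration of the workflow the article outlines (balancing an imbalanced dataset, training a classifier, and verifying it with a metric suited to the output), the sketch below uses only open-source scikit-learn building blocks. The dataset, the two input features, the class labels, and the choice of oversampling over class weighting are all illustrative assumptions, not the article's case-study data or its specific algorithms.

```python
# Minimal sketch of data balancing + classification + verification metrics
# (assumed, synthetic example; not the article's case study).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, f1_score
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

rng = np.random.default_rng(1)

# Imbalanced synthetic dataset: two rock-mass inputs, rare "unstable" class (label 1).
n_stable, n_unstable = 900, 100
X = np.vstack([
    rng.normal([60.0, 2.0], [10.0, 0.5], size=(n_stable, 2)),    # placeholder inputs
    rng.normal([35.0, 3.5], [10.0, 0.5], size=(n_unstable, 2)),
])
y = np.array([0] * n_stable + [1] * n_unstable)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Balance the training set by oversampling the minority class (one common option;
# class weighting or undersampling are alternatives).
minority = y_tr == 1
X_min_up, y_min_up = resample(X_tr[minority], y_tr[minority],
                              n_samples=int((~minority).sum()), random_state=0)
X_bal = np.vstack([X_tr[~minority], X_min_up])
y_bal = np.concatenate([y_tr[~minority], y_min_up])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
y_pred = clf.predict(X_te)

# For an imbalanced classification output, F1 and the confusion matrix are more
# informative verification metrics than plain accuracy.
print("F1 score:", round(f1_score(y_te, y_pred), 3))
print("confusion matrix:\n", confusion_matrix(y_te, y_pred))
```

The same skeleton carries over to regression outputs by swapping the classifier for a regressor and the F1 score for an error metric such as RMSE, which is the kind of adaptation of open-source building blocks the article advocates.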