Despite recent efforts, digitisation in rock engineering still suffers from the difficulty of standardising and statistically analysing databases created by quantifying qualitative assessments. Indeed, neither digitisation nor digitalisation has to date been used to drive changes to the principles upon which, for example, the geotechnical data collection process is founded, some of which have not changed in several decades. There is an empirical knowledge gap that cannot be bridged by technology alone. In this context, this paper presents the results of what the authors call a rediscovery of rock mass classification systems, together with a critical review of their definitions and limitations, to help engineers integrate these methods with digital acquisition systems. This discussion has significant implications for the use of technology as a tool to directly determine rock mass classification ratings and for the application of machine learning to rock engineering problems.
This paper presents a philosophical examination of classical rock engineering problems as the basis for moving from traditional knowledge to radical (innovative) knowledge. While the paper may appear abstract to engineers and geoscientists more accustomed to case studies and practical design methods, the aim is to demonstrate how the analysis of what constitutes engineering knowledge (what rock engineers know and how they know it) should always precede the integration of new technologies into empirical disciplines such as rock engineering. We propose a new conceptual model of engineering knowledge that combines experience (practical knowledge) and a priori knowledge (knowledge that is not based on experience). Our arguments are not a critique of actual engineering systems, but rather a critique of the (subjective) reasons invoked when using those systems or when defending conclusions reached with them. Our analysis identifies that rock engineering knowledge is shaped by cognitive biases, which over the years have created a sort of dogmatic barrier to innovation. It therefore becomes vital to initiate a discussion on engineering knowledge that can explain the challenges we face in rock engineering design at a time when digitalisation includes the introduction of machine learning algorithms that are expected to learn under conditions of limited information.
In observational method projects in geotechnical engineering, the final geotechnical design is decided during construction, depending on the observed behavior of the ground. Hence, engineers must be prepared to make crucial decisions promptly, with few available guidelines. In this paper, we propose coupling numerical analysis with machine learning (ML) algorithms to enhance the decision-making process in observational method projects. The proposed methodology consists of two main computational steps: (1) data generation, where multiple numerical models are automatically generated according to the anticipated range of input parameters, and (2) data analysis, where the input parameters and model results are analyzed with ML models. Using the case study of the Semel tunnel in Tel Aviv, Israel, we demonstrate how this computational process can contribute to the success of observational method projects through (1) computing feature importance, which helps identify the key features that drive failure before project execution, (2) providing insights into the monitoring plan, since correlative relationships between various results can be tested, and (3) delivering instantaneous predictions during construction.
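As an illustration of the two-step workflow described in the abstract above, the sketch below pairs a placeholder numerical model with a scikit-learn random forest. Everything specific in it is assumed rather than taken from the paper: the input parameters, their ranges, the monitored quantity, and the function run_numerical_model standing in for the actual solver used for the Semel tunnel.

```python
# Minimal sketch of the two-step workflow (data generation + data analysis),
# assuming a scikit-learn random forest as the ML model. The numerical-model
# step is replaced by a placeholder; all parameter names and ranges are
# illustrative assumptions, not values from the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def run_numerical_model(params):
    """Placeholder for one automatically generated numerical model run.
    In practice this would call the FE/FD code and return a monitored
    quantity, e.g. crown settlement."""
    e_modulus, cohesion, friction, depth = params
    return depth / (0.01 * e_modulus + 0.5 * cohesion + 0.1 * friction)

# Step 1: data generation over the anticipated range of input parameters.
n_models = 500
X = np.column_stack([
    rng.uniform(200, 2000, n_models),   # rock mass modulus (MPa), assumed range
    rng.uniform(0.1, 2.0, n_models),    # cohesion (MPa), assumed range
    rng.uniform(20, 45, n_models),      # friction angle (deg), assumed range
    rng.uniform(5, 30, n_models),       # tunnel depth (m), assumed range
])
y = np.array([run_numerical_model(p) for p in X])

# Step 2: data analysis -- train a surrogate and inspect feature importance.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out runs:", model.score(X_test, y_test))
print("Feature importances:", dict(zip(
    ["E", "cohesion", "friction", "depth"], model.feature_importances_)))

# During construction, updated parameter estimates give instantaneous predictions.
print("Predicted settlement proxy:", model.predict([[800.0, 0.8, 32.0, 18.0]]))
```

In a real project, step 1 would dispatch batch runs of the project's own numerical code, and the trained surrogate would be queried again as monitoring data narrow the plausible parameter ranges.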
Numerical modeling is increasingly used to analyze practical rock engineering problems. The geological strength index (GSI) is a critical input for many rock engineering problems; however, no available method allows the quantification of GSI input parameters, and engineers must consider a range of values. As projects progress, these ranges can be narrowed down. Machine learning (ML) algorithms have been coupled with numerical modeling to create surrogate models, a concept that aligns well with the deductive nature of data availability in rock engineering projects. In this paper, we demonstrate the use of surrogate models to analyze two common rock slope stability problems: (1) determining the maximum stable depth of a vertical excavation and (2) determining the allowable angle of a slope with a fixed height. Compared with support vector machine and k-nearest neighbors algorithms, the random forest model performs best on a data set of 800 numerical models for the problems discussed in the paper. For all these models, regression-type models outperform classification models. Once the surrogate model is confirmed to perform accurately, instantaneous predictions of maximum excavation depth and slope angle can be obtained for any range of input parameters. This capability is used to investigate the impact of narrowing the estimated GSI range.
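The following sketch shows how a fitted surrogate could be queried to study the effect of narrowing the GSI range, one of the capabilities described above. The training data, the input features (GSI, intact UCS, Hoek-Brown mi, unit weight), and the synthetic_max_depth stand-in for the numerical models are assumptions for illustration, not the paper's actual data set of 800 models.

```python
# Hedged sketch: propagate a GSI interval through a surrogate model to see how
# narrowing the GSI estimate tightens the predicted maximum excavation depth.
# The training data and functional form are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def synthetic_max_depth(gsi, ucs, mi, gamma):
    """Stand-in for numerical-model results (max stable excavation depth, m)."""
    return 0.5 * ucs * (gsi / 100.0) ** 1.5 * (1 + mi / 35.0) / (gamma / 25.0)

# Stand-in training set; in the paper this role is played by 800 numerical models.
n = 800
X = np.column_stack([
    rng.uniform(20, 80, n),    # GSI
    rng.uniform(10, 100, n),   # intact UCS (MPa)
    rng.uniform(5, 35, n),     # Hoek-Brown mi
    rng.uniform(22, 28, n),    # unit weight (kN/m3)
])
y = synthetic_max_depth(*X.T)
surrogate = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

def depth_range(gsi_low, gsi_high, ucs=50.0, mi=15.0, gamma=25.0, n_samples=2000):
    """Sample a GSI interval, hold the other inputs fixed, and return the
    spread of surrogate predictions."""
    gsi = rng.uniform(gsi_low, gsi_high, n_samples)
    grid = np.column_stack([gsi, np.full(n_samples, ucs),
                            np.full(n_samples, mi), np.full(n_samples, gamma)])
    pred = surrogate.predict(grid)
    return pred.min(), pred.max()

print("Wide GSI estimate (30-70): depth range %.1f-%.1f m" % depth_range(30, 70))
print("Narrowed GSI (45-55):      depth range %.1f-%.1f m" % depth_range(45, 55))
```

Because the surrogate answers in milliseconds rather than the hours a numerical model may need, this kind of range study can be repeated whenever the GSI estimate is updated during a project.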