Groundwater level (GWL) refers to the depth of the water table, i.e., the level of water below the Earth's surface in underground formations. It is an important factor in managing and sustaining the groundwater resources used for drinking water, irrigation, and other purposes. GWL prediction is a critical aspect of water resource management and requires accurate and efficient modelling techniques. This study reviews the conventional numerical, machine learning, and deep learning models most commonly used for predicting GWL. Significant advances in prediction efficiency have been made over the last two decades. However, while researchers have primarily focused on predicting monthly, weekly, daily, and hourly GWL, water managers and strategists require multi-year GWL simulations to take effective steps towards ensuring a sustainable supply of groundwater. In this paper, we draw on a collection of state-of-the-art approaches to inform the design of a novel methodology and to improve modelling efficiency in this field. We examined 109 research articles published from 2008 to 2022 that investigated different modelling techniques. We conclude that machine learning and deep learning approaches are efficient for modelling GWL. Moreover, we outline possible future research directions and recommendations to enhance the accuracy of GWL prediction models and improve the relevant understanding.
In this paper, we present a new multiple-learning prediction model that uses three different machine-learning methods to improve the accuracy of the α-β filter algorithm. The parameters α and β were tuned under dynamic conditions instead of static conditions. The proposed system uses the deep belief network (DBN), the deep extreme learning machine (DELM), and the support vector machine (SVM) as three alternative learning algorithms. The parameters learned by each machine-learning algorithm were then supplied to the α-β filter algorithm, which served as the prediction module and produced the final predicted results. The MAE and RMSE were used to evaluate the performance of the proposed α-β filter with each learning algorithm. Each algorithm recorded a different best-case accuracy: the DBN achieved 3.60 and 2.61, the DELM obtained 3.90 and 2.81, and the SVM attained 4.00 and 3.21, in terms of the RMSE and MAE, respectively, as compared to 5.21 and 3.95. When assessed in comparison with the typical alpha-beta filter algorithm, the proposed system provided results with better accuracy.
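To make the role of the tuned gains concrete, below is a minimal sketch of an alpha-beta filter whose α and β may be supplied per time step, e.g., by a trained learning module, instead of being fixed. The function names, default gain values, and the callable `gains` interface are illustrative assumptions, not the paper's actual implementation.

```python
def alpha_beta_step(x, v, z, dt, alpha, beta):
    """One alpha-beta filter update: predict, then correct with the residual."""
    x_pred = x + v * dt          # predicted state (position)
    r = z - x_pred               # innovation: measurement minus prediction
    x_new = x_pred + alpha * r   # state correction weighted by alpha
    v_new = v + (beta / dt) * r  # rate correction weighted by beta
    return x_new, v_new


def run_filter(measurements, dt=1.0, gains=None):
    """Filter a measurement sequence.

    `gains` maps a step index to (alpha, beta); with gains=None, static
    gains (0.5, 0.1) are used. In the paper's scheme, a learning module
    (DBN, DELM, or SVM) would instead supply gains adapted to the
    current dynamics.
    """
    x, v = measurements[0], 0.0
    estimates = []
    for k, z in enumerate(measurements):
        alpha, beta = gains(k) if gains else (0.5, 0.1)
        x, v = alpha_beta_step(x, v, z, dt, alpha, beta)
        estimates.append(x)
    return estimates
```

Passing a `gains` callable that returns different (α, β) pairs per step is the hook where a dynamically tuned, learner-driven configuration would plug in.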
A multi-cell mobility model and performance analysis for wireless cellular networks are presented. The mobility model plays an important role in characterizing mobility-related parameters such as the handoff call arrival rate, the blocking or dropping probability, and the channel holding time. We present a novel, tractable multi-cell mobility model for wireless cellular networks under the general assumption that the cell dwell times induced by mobiles' movement and the call holding times follow general distributions rather than the exponential distribution. We propose a novel generalized closed-form matrix formula to support the multi-cell mobility model with generally distributed call holding times. This allows us to develop a fixed-point algorithm to compute the loss probabilities and the handoff call arrival rate under the given assumptions. To reduce the computational complexity of the fixed-point algorithm, the channel holding time of each cell is approximated by an exponential distribution, since the loss probabilities of each cell are insensitive to the service-time distribution (Erlang insensitivity). The accuracy of the analytic multi-cell mobility model is supported by the agreement between the simulation results and the analytic ones.
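The fixed-point idea can be sketched as follows: handoff arrivals into each cell depend on the traffic carried by neighboring cells, which depends on their blocking, which in turn depends on their arrival rates; iterating this coupling to convergence yields the loss probabilities. By Erlang insensitivity, each cell's blocking is computed with the Erlang B formula regardless of the holding-time distribution. The ring topology, the single handoff fraction, and all numeric parameters below are illustrative assumptions, not the paper's model.

```python
def erlang_b(offered_load, channels):
    """Erlang B blocking probability via the numerically stable recursion."""
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_load * b / (n + offered_load * b)
    return b


def fixed_point_blocking(new_call_rate, handoff_frac, mu, channels,
                         n_cells=4, tol=1e-9, max_iter=1000):
    """Iterate handoff rates and blocking probabilities to a fixed point.

    Cells form a ring; a fraction `handoff_frac` of the calls accepted in
    cell i-1 hands off into cell i. `mu` is the mean service rate.
    """
    handoff_rate = [0.0] * n_cells
    blocking = [0.0] * n_cells
    for _ in range(max_iter):
        new_blocking = [
            erlang_b((new_call_rate + handoff_rate[i]) / mu, channels)
            for i in range(n_cells)
        ]
        # Handoff arrivals into cell i come from calls accepted in cell i-1.
        new_handoff = [
            handoff_frac * (new_call_rate + handoff_rate[i - 1])
            * (1.0 - new_blocking[i - 1])
            for i in range(n_cells)
        ]
        if max(abs(a - b) for a, b in zip(new_blocking, blocking)) < tol:
            return new_blocking, new_handoff
        blocking, handoff_rate = new_blocking, new_handoff
    return blocking, handoff_rate
```

Because the handoff update is a contraction for `handoff_frac < 1`, the iteration converges geometrically, which is what makes the fixed-point formulation computationally attractive.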
The alpha–beta filter algorithm has been widely researched for various applications, such as navigation and target-tracking systems. To improve the dynamic performance of the alpha–beta filter algorithm, a new prediction learning model is proposed in this study. The proposed model has two main components: (1) the alpha–beta filter algorithm as the main prediction module, and (2) a feedforward artificial neural network (FF-ANN) as the learning module. The model takes two inputs, temperature sensor and humidity sensor data, and the prediction algorithm is used to recover the actual sensor readings from noisy ones. With the proposed technique, prediction accuracy is significantly improved by adding the feedforward backpropagation neural network, which also reduces the root mean square error (RMSE) and the mean absolute error (MAE). We carried out different experiments with different experimental setups. The proposed model was evaluated against the traditional alpha–beta filter algorithm and other algorithms such as the Kalman filter. A higher prediction accuracy was achieved, with the MAE and RMSE improved by 35.1% and 38.2%, respectively. The final results show that the proposed model outperforms the traditional methods.
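Since the evaluation rests on the RMSE and MAE, here is a minimal, self-contained sketch of how these two metrics are computed against ground truth. The trailing moving-average smoother stands in for the FF-ANN-assisted filter purely for illustration; it is an assumption of this sketch, not the study's method.

```python
import math


def rmse(pred, truth):
    """Root mean square error between predictions and ground truth."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth))


def mae(pred, truth):
    """Mean absolute error between predictions and ground truth."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(truth)


def moving_average(readings, window=5):
    """Stand-in denoiser: trailing moving average over noisy sensor readings."""
    smoothed = []
    for i in range(len(readings)):
        lo = max(0, i - window + 1)
        smoothed.append(sum(readings[lo:i + 1]) / (i + 1 - lo))
    return smoothed
```

Any denoising predictor (the proposed filter, a Kalman filter, or this simple smoother) can be scored the same way: compute `rmse` and `mae` of its output against the clean readings and compare with the scores of the raw noisy input.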