Deep learning is widely applied across fields such as healthcare, voice recognition, image and video classification, real-time rendering, face recognition, and many other domains. Fundamentally, deep learning is favored for three reasons: its ability to perform better when trained on very large amounts of data, its high computational speed, and its capacity to learn at multiple levels of abstraction and representation. Accelerating deep learning demands a high-performance platform, which means accelerated hardware for training complex deep learning problems. Training deep networks on large datasets can take hours, days, or weeks, so accelerated hardware that reduces the computational load is used. The main focus of such research is to optimize prediction results in terms of accuracy, error rate, and execution time. The Graphics Processing Unit (GPU) is one such accelerator that currently prevails in reducing training time owing to its parallel architecture. In this paper, a multi-level (deep learning) approach is simulated on both the Central Processing Unit (CPU) and the GPU. Several studies report that GPUs deliver accurate results at the highest speeds. MATLAB is the framework used in this work to train the deep learning network for predicting groundwater level from a dataset of three parameters: temperature, rainfall, and water requirement. Thirteen years of data for the Faridabad district of Haryana, from 2006 to 2018, are used to train, validate, test, and analyze the network on both CPU and GPU. The training function trainlm was used to train the network on the CPU, and trainscg for GPU training, since the GPU does not support Jacobian-based training. The results show that, for large datasets, training accuracy increases and training time decreases on the GPU compared with the CPU. Overall performance improves when the network is trained on the GPU, which proves to be the better method for predicting the water level. The performance evaluation of the network shows the highest regression value, the lowest Mean Square Error (MSE), and the best performance value for the GPU during training.
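To make the CPU/GPU comparison concrete, the following is a minimal MATLAB sketch of the training setup the abstract describes, assuming the Deep Learning (Neural Network) Toolbox is available; the input and target matrices, the hidden-layer size, and the variable names are placeholders for illustration, not the authors' actual configuration.

```matlab
% Minimal sketch: feedforward network trained on CPU (trainlm) and GPU (trainscg).
% inputs is assumed to be a 3-by-N matrix (temperature; rainfall; water requirement)
% and targets a 1-by-N vector of groundwater levels -- both placeholders here.
inputs  = rand(3, 156);          % placeholder for the 13-year dataset
targets = rand(1, 156);          % placeholder for observed groundwater levels

% CPU training with Levenberg-Marquardt (Jacobian-based, CPU only)
netCPU = feedforwardnet(10);     % one hidden layer of 10 neurons (assumed size)
netCPU.trainFcn = 'trainlm';
[netCPU, trCPU] = train(netCPU, inputs, targets);

% GPU training with scaled conjugate gradient, which supports the 'useGPU' option
netGPU = feedforwardnet(10);
netGPU.trainFcn = 'trainscg';
[netGPU, trGPU] = train(netGPU, inputs, targets, 'useGPU', 'yes');

% Compare performance (MSE) of the two trained networks
perfCPU = perform(netCPU, targets, netCPU(inputs));
perfGPU = perform(netGPU, targets, netGPU(inputs));
```

trainscg is used for the GPU run because Jacobian-based functions such as trainlm cannot be executed with the 'useGPU' option, which is consistent with the choice of training functions described above.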
A growing body of research aims to predict the behavior of real-world systems with machine-learning models. Water-level prediction is erratic because of the system's variable behavior and the inadequacy of the required datasets, so basic models yield flat or low accuracy. In this paper, a powerful scaling strategy is proposed for an improved back-propagation algorithm using parallel computing for groundwater-level prediction on a graphics processing unit (GPU) for the Faridabad region, Haryana, India. The paper proposes a new streamlined form of the back-propagation algorithm for heterogeneous computing and examines the combination of an artificial neural network (ANN) with a GPU for predicting the groundwater level. Twenty years of data, from 2001 to 2020, are considered for three input parameters, namely temperature, rainfall, and water level, to predict the groundwater level using a parallelized back-propagation algorithm on the Compute Unified Device Architecture (CUDA). This makes the back-propagation algorithm well suited to reinforcing learning and performance, providing faster and more accurate water-level predictions on GPUs than sequential execution on central processing units (CPUs) alone.
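For context on why back-propagation lends itself to GPU parallelism, the generic weight-update rule it computes is shown below; within a layer, the gradient term for each weight depends only on that weight's incoming activation and the neuron's error signal, so many updates can be evaluated by GPU threads simultaneously. The abstract does not specify the authors' exact CUDA decomposition, so this is only the standard rule their method builds on:

\[
w_{ij} \leftarrow w_{ij} - \eta \,\frac{\partial E}{\partial w_{ij}}, \qquad \frac{\partial E}{\partial w_{ij}} = \delta_j \, o_i
\]

where \(\eta\) is the learning rate, \(E\) the prediction error (e.g., MSE), \(o_i\) the output of neuron \(i\), and \(\delta_j\) the back-propagated error term of neuron \(j\).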