Deep learning has reached new heights owing to its state-of-the-art performance across a variety of fields, including computer vision, natural language processing, time series analysis, and healthcare. Deep learning models are typically trained with batch or stochastic gradient descent together with a handful of standard optimizers, which can yield subpar model performance; considerable effort is therefore being devoted to improving performance through better gradient-based optimization methods. The proposed work analyzes convolutional neural networks (CNNs) and deep neural networks (DNNs) under several state-of-the-art optimizers (SGD, RMSprop, Adam, Adadelta, etc.) across different types of datasets, so that results can be compared architecture by architecture. The study concludes with a thorough report on optimizer performance across the various architectures and datasets, which should help researchers choose appropriate optimizers for their own frameworks and architectures. In total, the proposed work evaluates eight state-of-the-art optimizers on four CNN and DNN architectures, and the experimental results demonstrate improvements in the efficiency of these architectures across the datasets studied.
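To make the kind of comparison described above concrete, the sketch below trains the same small CNN under four of the named optimizers and reports test accuracy for each. This is a minimal illustration only, not the authors' experimental setup: the architecture, the dataset (MNIST), and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of an optimizer-comparison experiment (illustrative only;
# architecture, dataset, and hyperparameters are assumptions, not the
# paper's exact configuration).
import tensorflow as tf

# Load and normalize MNIST; add a channel dimension for the Conv2D layers.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

def build_cnn():
    """A small CNN; each optimizer gets a freshly initialized copy."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

# Four of the optimizers named in the abstract, with default/typical settings.
optimizers = {
    "SGD": tf.keras.optimizers.SGD(learning_rate=0.01),
    "RMSprop": tf.keras.optimizers.RMSprop(),
    "Adam": tf.keras.optimizers.Adam(),
    "Adadelta": tf.keras.optimizers.Adadelta(),
}

for name, opt in optimizers.items():
    model = build_cnn()
    model.compile(optimizer=opt,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3, batch_size=128, verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    print(f"{name}: test accuracy = {acc:.4f}")
```

The same loop extends naturally to additional optimizers, architectures, or datasets by swapping entries in the dictionary or the model-building function.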