Recent developments in deep learning have significantly improved the accuracy of acoustic impedance inversion. However, the conventional gradient-based optimizers used in deep learning frameworks, such as root mean square propagation (RMSProp), momentum, and adaptive moment estimation (ADAM), inherently tend to converge to the nearest optimum, compromising the solution by failing to attain the global minimum. We apply a hybrid global optimizer, genetic-evolutionary ADAM (GADAM), to address convergence at local optima in a semi-supervised deep sequential convolutional network-based learning framework for the non-convex seismic impedance inversion problem. GADAM combines the adaptive learning of ADAM with the evolutionary search of the genetic algorithm (GA), which facilitates faster convergence and avoids sinking into local minima. The efficacy of GADAM is tested on synthetic benchmark data and field examples, and the results are compared with those obtained from the widely used ADAM optimizer and the conventional least-squares method. In addition, uncertainty analysis is performed to assess how the choice of optimizer affects the efficiency and accuracy of the estimated seismic impedance values. Results show that both the level of uncertainty and the loss-function minima attained with GADAM are lower than those for ADAM. Thus, the present study demonstrates that the hybrid GADAM optimizer is more efficient than the extensively used ADAM optimizer for impedance estimation from seismic data in a deep learning framework.
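The hybrid scheme described above, ADAM-style local refinement combined with genetic selection, crossover, and mutation, can be sketched as follows. This is a minimal illustration on a standard non-convex test function (Rastrigin), not the authors' implementation: the population size, learning rate, mutation scale, and numerical gradient are illustrative assumptions.

```python
import numpy as np

def rastrigin(x):
    # Non-convex benchmark with many local minima; global minimum 0 at the origin.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def numerical_grad(f, x, h=1e-5):
    # Central-difference gradient (a stand-in for backpropagated gradients).
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def adam_refine(f, x, steps=10, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    # Standard ADAM updates applied to one candidate solution.
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = numerical_grad(f, x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g**2
        m_hat = m / (1 - b1**t)
        v_hat = v / (1 - b2**t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

def gadam(f, dim=2, pop_size=8, generations=30, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-4, 4, size=(pop_size, dim))
    for _ in range(generations):
        # Local refinement: each individual takes a few ADAM steps.
        for i in range(pop_size):
            pop[i] = adam_refine(f, pop[i])
        # Genetic evolution: keep the fitter half, breed replacements
        # by crossover (parent averaging) plus Gaussian mutation.
        pop = pop[np.argsort([f(x) for x in pop])]
        half = pop_size // 2
        for i in range(half, pop_size):
            a = pop[rng.integers(half)]
            b = pop[rng.integers(half)]
            pop[i] = 0.5 * (a + b) + rng.normal(0, 0.3, dim)
    best = min(pop, key=f)
    return best, f(best)

best_x, best_f = gadam(rastrigin)
print(best_f)
```

The mutation step lets candidates escape the basin ADAM would otherwise settle into, which is the mechanism the abstract credits for avoiding local minima.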
<p><span>Deep multi-task learning is one of the most challenging research topics widely explored in the field of facial expression recognition. Most deep learning models rely solely on class-label information and discard the local information in the sample data, which deteriorates the performance of the recognition system. This paper proposes a multi-feature-based deep convolutional neural network (D-CNN) that identifies the facial expression of the human face. To enhance the accuracy of the recognition system, a multi-feature learning model is employed in this study. The input images are preprocessed and enhanced via three filtering methods, i.e., Gaussian, Wiener, and adaptive mean filtering. The preprocessed image is then segmented using a face detection algorithm, and the local binary pattern (LBP) operator is applied to the detected face to extract the facial points of each facial expression. These points are fed into the D-CNN, which effectively recognizes the facial expression from the facial-point features. The proposed D-CNN is implemented, and the results are compared with those of an existing support vector machine (SVM). The analysis of deep features helps to extract the local information from the data without incurring a higher computational effort.</span></p>
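The LBP feature-extraction step described above can be sketched as follows. This is a minimal 8-neighbor LBP with per-cell histograms, not the paper's exact operator; the grid size and the random input array (standing in for a detected, filtered face crop) are illustrative assumptions.

```python
import numpy as np

def lbp_image(gray):
    # Basic 8-neighbor local binary pattern on a 2-D grayscale array:
    # each interior pixel is encoded by thresholding its 3x3 neighborhood
    # against the center value and packing the 8 bits into a code in 0..255.
    c = gray[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        nb = gray[1 + dy:gray.shape[0] - 1 + dy,
                  1 + dx:gray.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code

def lbp_histogram(gray, grid=(4, 4)):
    # Concatenate per-cell LBP histograms into one feature vector,
    # preserving the local spatial information the abstract emphasizes.
    code = lbp_image(gray)
    h, w = code.shape
    feats = []
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            cell = code[gy * h // grid[0]:(gy + 1) * h // grid[0],
                        gx * w // grid[1]:(gx + 1) * w // grid[1]]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            feats.append(hist / max(cell.size, 1))
    return np.concatenate(feats)

# Toy usage on a random array; a real pipeline would pass the
# filtered, detected face region here before classification.
rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
features = lbp_histogram(face)
print(features.shape)  # (4096,) -- one 256-bin histogram per 4x4 grid cell
```

The resulting fixed-length vector is the kind of local-texture descriptor that would then be fed to the classifier (D-CNN or SVM) for expression recognition.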