The face is a challenging object for a computer to recognize and analyze automatically in many applications of interest, such as facial gender classification. Large visual variations of faces, such as occlusions, pose changes, and extreme lighting, pose great challenges for these tasks in real-world applications. This paper describes fast transfer learning of representations using a convolutional neural network (CNN) model for gender classification from face images. Transfer learning aims to provide a framework for utilizing previously acquired knowledge to solve new but similar problems much more quickly and effectively. The experimental results showed that the transfer learning method trains faster and achieves higher accuracy than a CNN trained without transfer learning.
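As a rough illustration of this kind of approach, the sketch below shows transfer learning for binary gender classification in Keras, assuming an ImageNet-pretrained VGG16 backbone with a small head trained from scratch; the abstract does not specify the actual backbone, head, or hyperparameters, so these choices are illustrative assumptions only.

# Minimal transfer-learning sketch for gender classification (assumed setup:
# TensorFlow/Keras, ImageNet-pretrained VGG16; not the paper's exact model).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Load the convolutional base pretrained on ImageNet and freeze its weights,
# so previously acquired features are reused rather than relearned.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Add a small classification head that is trained on the new face dataset.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # two classes: male / female
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.2, epochs=10)

Because only the small head is trained while the pretrained base stays frozen, far fewer parameters are updated per step, which is one common reason transfer learning converges faster than training the whole network from scratch.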
Emotion recognition from facial images is one of the most challenging topics in human psychological interaction with machines. Along with advances in robotics, computer graphics, and computer vision, research on facial expression recognition is an important part of intelligent-systems technology for interactive human interfaces. Because each person may express emotions differently, classifying facial expressions is difficult and requires large amounts of training data, so a deep learning approach is an alternative solution. The purpose of this study is to propose a Convolutional Neural Network (CNN) architecture with batch normalization, consisting of three blocks of multiple convolution layers in a simpler architectural model, for recognizing emotional expressions from human facial images in the FER2013 dataset from Kaggle. The experimental results show that training accuracy reaches 98%, but there is still overfitting, as validation accuracy is only 62%. The proposed model performs better than the same model without batch normalization.
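A minimal sketch of such a network is shown below, assuming the standard FER2013 format (48x48 grayscale faces, 7 emotion classes) and a Keras implementation; the filter counts, number of convolutions per block, and dense-layer sizes are illustrative assumptions, not the exact architecture from the paper.

# Compact CNN with batch normalization for FER2013 (assumed layer sizes).
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(filters):
    # Two convolutions, each followed by batch normalization, then pooling.
    return [
        layers.Conv2D(filters, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.Conv2D(filters, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
    ]

model = models.Sequential(
    [layers.Input(shape=(48, 48, 1))]            # 48x48 grayscale face crops
    + conv_block(32) + conv_block(64) + conv_block(128)  # three conv blocks
    + [
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),                      # helps against overfitting
        layers.Dense(7, activation="softmax"),    # 7 FER2013 emotion classes
    ]
)

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])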
Image interpolation is a basic requirement for many image processing tasks, such as medical image processing. It is the technique used when resizing an image: each pixel in the new image must be mapped back to a location in the old image to compute its value. Many algorithms are available for determining the new pixel value, most of which involve some form of interpolation between the closest pixels in the old image. In this paper, we use the Bicubic interpolation algorithm to resize medical images from the Messidor dataset and evaluate the results with three metrics, Mean Square Error (MSE), Root Mean Squared Error (RMSE), and Peak Signal-to-Noise Ratio (PSNR), comparing them against the Bilinear and Nearest-neighbor algorithms. The results show that the Bicubic algorithm outperforms Bilinear and Nearest-neighbor, and that the larger the dimensions to which an image is resized, the higher its similarity to the original image, although the computational complexity also increases.
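The following sketch illustrates the evaluation procedure under stated assumptions: an image is downscaled and restored with bicubic interpolation using OpenCV, and MSE, RMSE, and PSNR are computed against the original. The file name is a placeholder rather than an actual Messidor image, and the scale factor is arbitrary.

# Bicubic resize plus MSE/RMSE/PSNR scoring (assumes OpenCV and NumPy).
import cv2
import numpy as np

original = cv2.imread("retina.png")  # hypothetical input image

# Downscale then upscale back with bicubic interpolation; swap in
# cv2.INTER_LINEAR (bilinear) or cv2.INTER_NEAREST (nearest-neighbor)
# to reproduce the comparison between methods.
small = cv2.resize(original, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_CUBIC)
restored = cv2.resize(small, (original.shape[1], original.shape[0]),
                      interpolation=cv2.INTER_CUBIC)

# Similarity metrics between the original and the reconstructed image.
mse = np.mean((original.astype(np.float64) - restored.astype(np.float64)) ** 2)
rmse = np.sqrt(mse)
psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")

print(f"MSE={mse:.2f}  RMSE={rmse:.2f}  PSNR={psnr:.2f} dB")

Lower MSE and RMSE and higher PSNR indicate a reconstruction closer to the original, which is the criterion used to compare the interpolation methods.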