Facial recognition is one of the most popular yet challenging tasks in computer vision. A variety of studies have been conducted on this subject, each proposing a stand-alone approach. While many studies strive primarily for higher accuracy, this research aims to improve the efficiency of human-computer interaction by classifying emotions from human faces using a custom neural network. This paper proposes a Convolutional Neural Network (CNN) based on the Visual Geometry Group 19 (VGG-19) classification model, pretrained on the ImageNet dataset and fine-tuned for emotion classification. Classification was performed on the FER-2013 dataset, which consists of over 35,000 facial images captured in various settings and labeled with 7 different emotions. The dataset was divided into three subsets: 80% for training, 10% for validation, and 10% for testing. With an accuracy of 71.80%, the proposed technique surpasses most custom-built models.
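The 80/10/10 split described above can be sketched as follows. This is a minimal illustration, not the authors' code: the helper name `split_indices` and the fixed seed are assumptions, and the image count of 35,887 is the commonly cited size of FER-2013.

```python
import random

def split_indices(n_images, train_frac=0.8, val_frac=0.1, seed=0):
    """Shuffle image indices and split them 80/10/10 into train/val/test."""
    idx = list(range(n_images))
    random.Random(seed).shuffle(idx)  # deterministic shuffle for reproducibility
    n_train = int(n_images * train_frac)
    n_val = int(n_images * val_frac)
    # Remaining ~10% of indices form the held-out test set
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(35887)  # FER-2013 image count (assumed here)
print(len(train), len(val), len(test))  # → 28709 3588 3590
```

Shuffling before slicing matters because FER-2013 rows are grouped by their original usage labels; a deterministic seed keeps the three subsets disjoint and reproducible across runs.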