The primary objective of this study is to develop a real-time system that predicts an individual's emotional state from facial expressions. The proposed approach combines transfer learning with convolutional neural networks (CNNs) and a parameter-efficient design that keeps the model lightweight. To broaden the range of recognizable expressions, the CNN was trained jointly on the FER-2013, JAFFE, and CK+ datasets. The resulting model identifies seven emotions: happiness, fear, surprise, anger, contempt, sadness, and neutrality. Several metrics were used to assess the model's performance, and the experimental results indicate that the proposed approach surpasses previous studies in both speed and accuracy.