Facial emotion recognition (FER) is a crucial capability for many applications and remains an open problem. Emotion recognition has typically been addressed with deep learning techniques such as convolutional neural networks (CNNs); however, these models are often expensive in terms of computational power and complexity. To alleviate this problem, we propose a lightweight CNN for facial emotion recognition, called the Custom Lightweight CNN-based Model (CLCM), based on the well-known MobileNetV2 architecture. Performance was evaluated on four public datasets (FER-2013, RAF-DB, AffectNet, and CK+) on a seven-class facial emotion recognition task, and CLCM was compared with the well-known MobileNetV2 and ShuffleNetV2 architectures. CLCM performed comparably to, or better than, these more complex models. Specifically, on FER-2013, CLCM achieved 63% accuracy, versus 58% for MobileNetV2 and 65% for ShuffleNetV2. On RAF-DB, CLCM reached 84%, compared with 73% for MobileNetV2 and 80% for ShuffleNetV2. On AffectNet, MobileNetV2 and ShuffleNetV2 both achieved 57% accuracy, while CLCM achieved 54%. These results establish CLCM as an efficient model for FER. Although CLCM has fewer parameters (2.3 million) than MobileNetV2 (3.5 million) and ShuffleNetV2 (3.9 million), it achieved good results in almost all analyses. CLCM's reduced computational cost enables its use in enhanced human-computer interaction, affective computing, and personalized user experiences, especially in real-world scenarios such as psychological and medical assessment, automotive applications (real-time monitoring of the driver's emotional state), and care of vulnerable individuals, where resource-constrained systems are common and real-time, reliable responses are required.
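As a rough illustration of the model scale discussed above, the sketch below builds a seven-class MobileNetV2-derived classifier in PyTorch and reports its parameter count. It is not the published CLCM architecture: the width multiplier (0.75) and the 224x224 input size are assumptions chosen only to show how a MobileNetV2 backbone can be slimmed toward the roughly 2.3-million-parameter budget reported for CLCM.

```python
import torch
from torchvision.models import mobilenet_v2

# Seven facial emotion classes, as used in FER-2013 and related benchmarks.
NUM_EMOTIONS = 7  # anger, disgust, fear, happiness, sadness, surprise, neutral

# Hypothetical configuration: width_mult=0.75 shrinks every layer's channel
# count relative to the standard MobileNetV2 (width_mult=1.0, ~3.5M params).
model = mobilenet_v2(width_mult=0.75, num_classes=NUM_EMOTIONS)

# Count trainable parameters to compare against the ~2.3M reported for CLCM.
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {n_params / 1e6:.2f}M")

# Forward pass on a dummy 224x224 RGB face crop (input size is an assumption).
logits = model(torch.randn(1, 3, 224, 224))
probs = torch.softmax(logits, dim=1)
print(probs.shape)  # torch.Size([1, 7])
```

In practice the backbone would be trained (or fine-tuned) on the face-crop datasets listed above; the snippet only demonstrates how reducing the width multiplier trades parameters for capacity in a MobileNetV2-style network.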