Facial expression has long been regarded as a channel of both verbal and nonverbal communication. The muscular configuration of a person's face reflects their physical and mental state. Automatically mapping facial features into an expression category by computer is far more scalable than doing so manually, and Convolutional Neural Networks (CNNs), an Artificial Intelligence approach, have been widely adopted to improve this task. However, overfitting during training can lower model performance and leave it underperforming; dropout is a regularization method used to reduce test error. In this work, dropout is applied at both the convolutional and dense layers of an improved CNN model that classifies facial emotions into seven distinct categories: Happy, Angry, Sad, Surprise, Neutral, Disgust, and Fear. The experimental setup used the JAFFE, CK48, FER2013, RVDSR, and CREMA-D datasets, together with a self-prepared dataset of 36,153 facial images, to observe training and test accuracy in the presence and absence of dropout. With dropout, test accuracies of 92.33%, 96.50%, 97.78%, 99.44%, and 98.68% are obtained on the FER2013, RVDSR, CREMA-D, CK48, and JAFFE datasets, respectively. Because the number of features involved in the computation is large, the experiments required higher-capacity NVIDIA hardware, with 16 GB of GPU memory, 13 GB of CPU memory, and 73.1 GB of storage.
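The abstract does not specify how dropout is implemented in the model. As a minimal illustrative sketch (not the authors' code), the following NumPy snippet implements standard inverted dropout, the variant commonly used in CNN frameworks: during training a fraction `rate` of activations is zeroed and the survivors are rescaled by 1/(1 - rate), so the expected activation is unchanged and the layer becomes an identity at test time.

```python
import numpy as np

def dropout(x, rate, training, rng=None):
    """Inverted dropout.

    During training, zero each activation with probability `rate` and
    rescale the survivors by 1 / (1 - rate) so the expected value of the
    output matches the input. At test time, return the input unchanged.
    """
    if not training or rate == 0.0:
        return x
    if rng is None:
        rng = np.random.default_rng(0)
    mask = rng.random(x.shape) >= rate  # True = keep this activation
    return x * mask / (1.0 - rate)

# Usage: the same call works for a flattened dense activation or a
# convolutional feature map, since it operates elementwise.
acts = np.ones((4, 4))
train_out = dropout(acts, rate=0.25, training=True)   # some entries zeroed
test_out = dropout(acts, rate=0.25, training=False)   # identity at test time
```

In a CNN such as the one described, a layer like this would typically be inserted after convolutional blocks and between dense layers; the dropout rate itself is a tunable hyperparameter not reported in this passage.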