Humans convey their messages in different forms, and expressing emotions and moods through facial expressions is one of them. In this work, to avoid the traditional feature extraction process (geometry-based, template-based, and appearance-based methods), a CNN model is used as a feature extractor for emotion detection from facial expressions. Three pre-trained models, VGG-16, ResNet-50, and Inception-V3, are also used. The experiments are conducted on the FER-2013 facial expression dataset and the Extended Cohn-Kanade (CK+) dataset. On the FER-2013 dataset, the accuracy rates for CNN, ResNet-50, VGG-16, and Inception-V3 are 76.74%, 85.71%, 85.78%, and 97.93%, respectively. Similarly, on the CK+ dataset, the accuracy rates for CNN, ResNet-50, VGG-16, and Inception-V3 are 84.18%, 92.91%, 91.07%, and 73.16%, respectively. The best results were achieved by Inception-V3 with 97.93% on the FER-2013 dataset and by ResNet-50 with 92.91% on the CK+ dataset.
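As a rough illustration of the transfer-learning setup described above, the following is a minimal sketch of using a pre-trained backbone (ResNet-50 here) as a feature extractor for 7-class facial expression recognition. The image size, classification head, and number of classes are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch (not the authors' exact pipeline): a frozen pre-trained
# backbone used as a feature extractor, followed by a small classification head.
import tensorflow as tf
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, models

NUM_CLASSES = 7          # typical FER-2013 / CK+ emotion categories (assumption)
IMG_SIZE = (224, 224)    # ResNet-50's default input resolution

# Load ImageNet weights and drop the original classification head.
backbone = ResNet50(weights="imagenet", include_top=False,
                    input_shape=IMG_SIZE + (3,), pooling="avg")
backbone.trainable = False  # keep the backbone as a fixed feature extractor

model = models.Sequential([
    backbone,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The same pattern applies to VGG-16 and Inception-V3 by swapping the backbone class; fine-tuning (unfreezing some backbone layers) is a common variation but is not shown here.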