Rapid growth in advanced human-computer interaction (HCI) applications has made facial expression recognition (FER) immensely popular among computer vision and pattern recognition researchers. Recently, a robust texture descriptor named the Dynamic Local Ternary Pattern (DLTP), originally developed for face liveness detection, has proved very useful in preserving facial texture information. These findings motivated us to investigate DLTP in more detail and examine its usefulness for the FER task. To this end, a FER pipeline is developed that applies a sequence of steps to recognize facial expressions in a given input image. The pipeline first locates and registers the faces in the image. Next, it enhances the facial images using an image enhancement operator. Facial features are then extracted from the enhanced images with the DLTP descriptor, and the dimensionality of the resulting high-dimensional DLTP features is reduced via Principal Component Analysis (PCA). Finally, the proposed scheme classifies the reduced features into facial expressions using a multi-class Kernel Extreme Learning Machine (K-ELM) classifier. Extensive experiments on four in-the-lab FER datasets and one in-the-wild FER dataset confirmed the superiority of the method, and cross-dataset experiments on different combinations of these datasets demonstrated its robustness. Comparisons with several state-of-the-art FER methods further demonstrate the usefulness of the proposed scheme: with recognition accuracies of 99.76%, 99.72%, 93.98%, 96.71%, and 78.75% on the CK+, RaF, KDEF, JAFFE, and RAF-DB datasets, respectively, the pipeline outperformed the previous state of the art.

INDEX TERMS Facial expression recognition, dynamic local ternary pattern, principal component analysis, kernel extreme learning machine, cross-dataset, cross-validation.
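The last two stages of the pipeline (PCA dimensionality reduction followed by multi-class K-ELM classification) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random feature vectors stand in for the DLTP descriptors, PCA is computed directly via SVD, and the K-ELM follows the standard closed-form solution beta = (K + I/C)^(-1) T with an RBF kernel; the hyperparameters C and gamma are placeholder values.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_fit_transform(X, n_components):
    # Center the data and project it onto the top principal components.
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_components].T          # projection matrix (d x n_components)
    return Xc @ W, mean, W

def rbf_kernel(A, B, gamma):
    # Pairwise RBF kernel: exp(-gamma * ||a - b||^2).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_train(X, y, n_classes, C=100.0, gamma=0.1):
    # Closed-form K-ELM output weights: beta = (K + I/C)^(-1) T,
    # where T holds one-hot class targets.
    K = rbf_kernel(X, X, gamma)
    T = np.eye(n_classes)[y]
    return np.linalg.solve(K + np.eye(len(X)) / C, T)

def kelm_predict(X_train, beta, X_test, gamma=0.1):
    # Class scores for test samples; argmax gives the predicted expression.
    return rbf_kernel(X_test, X_train, gamma) @ beta

# Toy stand-in for DLTP feature vectors: 60 face images, 7 expression classes.
X = rng.normal(size=(60, 128))
y = rng.integers(0, 7, size=60)

Z, mean, W = pca_fit_transform(X, 20)     # reduce 128-D features to 20-D
beta = kelm_train(Z, y, n_classes=7)
pred = kelm_predict(Z, beta, Z).argmax(axis=1)
train_acc = (pred == y).mean()
```

In practice the DLTP histograms replace the random features, and C and gamma would be tuned by cross-validation on each dataset.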