Educational learning settings exploit cognitive factors as the primary feedback for enhancing personalization in teaching and learning. Beyond cognition, however, the learner's emotions, which reflect the affective dimension of learning, also play an important role in the learning process. These emotions can be recognized by tracking the learner's explicit behaviors, such as facial or vocal expressions. Despite considerable efforts to recognize emotions, the research community is currently constrained by two issues: i) the lack of efficient feature descriptors to accurately represent and subsequently recognize (detect) the learner's emotions; and ii) the lack of contextual datasets for benchmarking the performance of emotion recognizers in learning-specific scenarios, resulting in poor generalization. This paper presents a facial emotion recognition technique (FERT). The FERT is realized through the results of a preliminary analysis across various facial feature descriptors. Emotions are classified using the multiple kernel learning (MKL) method, which reportedly possesses good merits. A contextually relevant simulated learning emotion (SLE) dataset is introduced to validate the FERT scheme. The recognition performance of the FERT scheme generalizes to 90.3% on the SLE dataset. On the more popular but non-contextual datasets, the scheme achieved 90.0% and 82.8% on the extended Cohn-Kanade (CK+) and Acted Facial Expressions in the Wild (AFEW) datasets, respectively. A test of the null hypothesis that there is no significant difference in the performance accuracies of the descriptors proved otherwise (χ² = 14.619, df = 5, p = 0.01212) for a model considered at a 95% confidence level.
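As an illustration of the kind of significance test reported above, the following is a minimal sketch assuming a Friedman test comparing the accuracies of six descriptors (df = k − 1 = 5) across repeated evaluation folds. The descriptor names and accuracy values are hypothetical placeholders, not the study's data.

```python
# Hedged sketch: a Friedman chi-squared test over hypothetical per-fold
# accuracies of six feature descriptors. The specific test and all values
# below are illustrative assumptions, not the paper's reported data.
from scipy.stats import friedmanchisquare

# Hypothetical recognition accuracies per cross-validation fold for six
# candidate descriptors (one list per descriptor, one entry per fold).
accuracies = {
    "descriptor_1": [0.88, 0.90, 0.89, 0.91, 0.90],
    "descriptor_2": [0.85, 0.86, 0.84, 0.87, 0.85],
    "descriptor_3": [0.80, 0.82, 0.81, 0.79, 0.80],
    "descriptor_4": [0.90, 0.91, 0.92, 0.90, 0.91],
    "descriptor_5": [0.78, 0.77, 0.79, 0.78, 0.76],
    "descriptor_6": [0.83, 0.84, 0.82, 0.85, 0.83],
}

# H0: there is no significant difference in the descriptors' accuracies.
stat, p = friedmanchisquare(*accuracies.values())
print(f"chi-squared = {stat:.3f}, df = {len(accuracies) - 1}, p = {p:.5f}")

# Reject H0 at the 95% confidence level when p < 0.05.
if p < 0.05:
    print("Reject H0: descriptor accuracies differ significantly.")
```

A p-value below 0.05, as in the reported result (p = 0.01212), indicates that the choice of feature descriptor has a statistically significant effect on recognition accuracy.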