2017
DOI: 10.1016/j.patcog.2016.12.002

A dynamic framework based on local Zernike moment and motion history image for facial expression recognition

Abstract: A dynamic descriptor facilitates robust recognition of facial expressions in video sequences. The two main current approaches to this recognition are basic emotion recognition and recognition based on facial action coding system (FACS) action units. In this paper we focus on basic emotion recognition and propose a spatiotemporal feature based on local Zernike moment in the spatial domain using motion change frequency. We also design a dynamic feature comprising motion history image and entropy. To recognise a f…
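The abstract mentions a dynamic feature built from a motion history image (MHI) and entropy. The sketch below is a minimal illustration of that general idea, not the authors' exact formulation: the frame-differencing step, the threshold value and the decay duration are assumptions chosen for clarity.

```python
# Minimal sketch: motion history image (MHI) plus an entropy summary,
# in the spirit of the dynamic feature described in the abstract.
# Threshold, decay (tau) and frame differencing are illustrative assumptions.
import numpy as np

def update_mhi(mhi, prev_gray, curr_gray, tau=15, motion_thresh=30):
    """Update an MHI: moving pixels are set to tau, others decay by 1."""
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    moving = diff > motion_thresh
    return np.where(moving, tau, np.maximum(mhi - 1, 0))

def mhi_entropy(mhi, bins=16):
    """Shannon entropy of the MHI intensity histogram, used as a scalar dynamic cue."""
    hist, _ = np.histogram(mhi, bins=bins, range=(0, mhi.max() + 1e-6))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Usage on a list of grayscale frames (H x W uint8 arrays):
# mhi = np.zeros_like(frames[0], dtype=np.int16)
# for prev, curr in zip(frames[:-1], frames[1:]):
#     mhi = update_mhi(mhi, prev, curr)
# feature = mhi_entropy(mhi)
```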

Cited by 63 publications (29 citation statements). References 43 publications.
“…For instance, Liu et al. [29] present an expressionlet-based spatio-temporal manifold descriptor which shows superiority over traditional methods on FER tasks. Fan and Tjahjadi [30] provide a spatio-temporal feature based on local Zernike moment and motion history image for dynamic FER. Yan [31] proposes collaborative discriminative multi-metric learning for FER in video sequences.…”
Section: A Hand-designed Feature-based Methods (mentioning, confidence: 99%)
“…The face segmentation model, based on geometric information, defines the most appropriate layout for extracting face features in order to recognize the expression. Assuming that face regions are well aligned, histogram-like features are often computed from equal-sized face grids [25]. However, apparent misalignment can be observed, primarily caused by face deformations induced by the expression itself.…”
Section: Face Segmentation Models (mentioning, confidence: 99%)
“…Thus, such models are robust against head pose variation and registration errors. The efforts of [5,22,52] and [53] involve dividing the face into multiple blocks and encoding the face as the concatenation of individual block representations. In FACS, each facial muscle movement is associated with an AU, and combinations of certain AUs are considered for expression classification.…”
Section: Related Work (mentioning, confidence: 99%)