Face and Gesture 2011
DOI: 10.1109/fg.2011.5771366

Emotion recognition using PHOG and LPQ features

Abstract: We propose a method for automatic emotion recognition as part of the FERA 2011 competition [1]. The system extracts pyramid of histogram of gradients (PHOG) and local phase quantisation (LPQ) features to encode shape and appearance information. For selecting key frames, k-means clustering is applied to the normalised shape vectors derived from constrained local model (CLM) based face tracking on the image sequences. Shape vectors closest to the cluster centres are then used to extract the sh…
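The key-frame step described in the abstract — k-means over per-frame shape vectors, keeping the frame nearest each cluster centre — can be sketched as follows. This is an illustrative NumPy implementation; the cluster count, iteration cap, and initialisation are assumptions, since the excerpt does not give the paper's exact settings.

```python
import numpy as np

def select_key_frames(shape_vectors, n_clusters=5, seed=0):
    """Cluster per-frame shape vectors with k-means and return the index
    of the frame closest to each cluster centre (one key frame per cluster).

    shape_vectors: (n_frames, dim) array of normalised CLM shape vectors.
    A plain Lloyd's-algorithm sketch, not the paper's implementation.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(shape_vectors, dtype=float)
    centres = X[rng.choice(len(X), size=n_clusters, replace=False)]
    for _ in range(50):  # Lloyd iterations until convergence or cap
        d = np.linalg.norm(X[:, None] - centres[None, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                        else centres[k] for k in range(n_clusters)])
        if np.allclose(new, centres):
            break
        centres = new
    # the frame nearest each final centre becomes a key frame
    d = np.linalg.norm(X[:, None] - centres[None, :], axis=2)
    return sorted(set(int(i) for i in d.argmin(axis=0)))
```

PHOG and LPQ features would then be extracted only from the returned frame indices, which is what makes the downstream feature extraction cheap.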


Cited by 187 publications (99 citation statements)
References 25 publications
“…To make further analysis memory and computationally inexpensive, a key-interest point selection approach, similar to concept frame selection in affect analysis [29], is used. For a video V, a total of K detected interest points are clustered using the Approximate Nearest Neighbour (ANN) algorithm and Fig.…”
Section: B. Holistic Body Movement
confidence: 99%
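The quoted key-interest-point selection assigns detected points to cluster centres via approximate nearest neighbours. A minimal sketch of the assignment step, using a k-d tree as a stand-in for the ANN algorithm (the function name and the centres are illustrative; the quoted work's clustering details are not specified here):

```python
import numpy as np
from scipy.spatial import cKDTree

def assign_points_to_centres(points, centres):
    """Assign each detected interest point to its nearest cluster centre.

    points:  (K, d) array of interest-point coordinates from a video V.
    centres: (C, d) array of cluster centres.
    Returns a length-K array of centre indices. Uses a k-d tree for fast
    nearest-neighbour lookup, a hypothetical stand-in for the ANN step.
    """
    tree = cKDTree(centres)
    _, idx = tree.query(points)  # nearest-centre index per point
    return idx
```

Keeping only points near a few representative centres is what makes the further analysis memory- and compute-inexpensive, as the quote notes.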
“…An adequate facial representation is central for effective affect recognition, as the classification performance is limited by the quality and relevance of the features used in the representation.

Table 1: Summary of appearance representations used for affect recognition. The representations are categorised by type (local vs. global), nature of data (naturalistic, N vs. posed, P) and affect modeling (discrete vs. continuous).

                Discrete                            Continuous
        Global            Local                 Global    Local
  P     [19], [40], [17]  [1], [38], [33],      -         -
                          [36], [28], [34],
                          [13], [OurWork]
  N     [21], [14]        [9], [6], [24],       [25]      [32], [14], [8],
                          [31], [5], [14]                 [25], [30], [OurWork]
…”
Section: Introduction
confidence: 99%
“…SVMs have been widely used in the literature to model classification problems, including facial expression recognition [30], [34], [19]. Given a set of training samples, an SVM maps them via a non-linear transformation to a higher-dimensional space, with the aim of determining a hyperplane that partitions the data by class or label.…”
Section: Algorithm 1: Compute HDTP
confidence: 99%
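The SVM formulation quoted above — a non-linear mapping to a higher dimension, then a separating hyperplane — is what an RBF-kernel SVM computes implicitly via the kernel trick. A minimal scikit-learn sketch on synthetic stand-in features (the data, dimensions, and hyperparameters are illustrative, not those of the cited works):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins for two classes of appearance feature vectors
# (e.g. PHOG/LPQ descriptors); real features would replace these.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (40, 16)),
               rng.normal(3.0, 1.0, (40, 16))])
y = np.array([0] * 40 + [1] * 40)

# RBF kernel: the high-dimensional mapping is never computed explicitly;
# the kernel evaluates inner products in that space directly.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X, y)
```

For multi-class expression recognition, `SVC` handles more than two labels automatically via a one-vs-one decomposition.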
“…In addition, facial expressions have been analysed [16] and classified [17] using TS imaging. Commonly, VS imaging has been used for modelling affective computing problems such as depression [18], emotion [19], and pain analysis [20]. However, to our knowledge, the literature has not yet developed computational models for stress recognition using both TS and VS imaging together.…”
Section: Introduction
confidence: 99%