2011
DOI: 10.1007/978-3-642-22822-3_28
Appearance-Based Smile Intensity Estimation by Cascaded Support Vector Machines

Cited by 8 publications (7 citation statements)
References 4 publications
“…The rates offered by this author are remarkable and similar to those offered by other authors like Shimada 30. He proposed a method that combines a new LBP-based approach and local intensity histograms.…”
Section: Discussion (supporting)
confidence: 87%
See 1 more Smart Citation
“…The rates offered by this author are remarkable and similar to those offered by other authors like Shimada 30 . He proposed a method that combines a new LBP based approach and local intensity histograms.…”
Section: Discussionsupporting
confidence: 87%
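The excerpt above mentions combining an LBP-based descriptor with local intensity histograms. As a minimal sketch only, the following computes a generic 3×3 local binary pattern code and its normalised histogram; this is a standard LBP illustration, not Shimada et al.'s exact descriptor:

```python
import numpy as np

def lbp_histogram(image, bins=256):
    """Normalised histogram of basic 3x3 local binary pattern codes.

    A generic LBP sketch: each interior pixel is encoded by comparing
    its 8 neighbours to the centre value, yielding an 8-bit code.
    """
    img = np.asarray(image, dtype=np.float64)
    center = img[1:-1, 1:-1]
    # 8 neighbour offsets, one bit each
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=np.int32)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

A real system would compute such histograms over local face regions and concatenate them into a feature vector for the classifier.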
“…These authors contend that the training set plays almost as important a role as the detection method applied, especially in terms of variability and size, and assert that a normal training phase may handle thousands of images. More recently, works using public databases have been proposed by Huang 17 , Yadappanavar 39 and Shimada 30 . These works, as well as two commercial systems, are discussed in section 7.…”
Section: State of the Art (mentioning)
confidence: 99%
“…Since Bartlett et al. (2003), many studies have used classifier decision values to estimate expression intensity (Bartlett et al., 2006a,b; Littlewort et al., 2006; Reilly et al., 2006; Koelstra and Pantic, 2008; Whitehill et al., 2009; Yang et al., 2009; Savran et al., 2011; Shimada et al., 2011). However, only a few of them have quantitatively evaluated their performance by comparing their estimations to manual (i.e., "ground truth") coding.…”
Section: Previous Work (mentioning)
confidence: 99%
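The idea described in the excerpt above — reusing a binary classifier's continuous decision value as an expression-intensity score — can be sketched with a toy linear SVM trained by the Pegasos subgradient method. The 2-D data, labels, and hyper-parameters here are illustrative assumptions, not anything from the cited studies:

```python
import numpy as np

# Hypothetical toy data: two Gaussian clusters standing in for
# "non-smile" (y = -1) and "smile" (y = +1) feature vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

# Train a bias-free linear SVM with the Pegasos subgradient method.
w, lam = np.zeros(2), 0.01
for t in range(1, 2001):
    i = rng.integers(len(X))
    eta = 1.0 / (lam * t)
    if y[i] * (X[i] @ w) < 1:           # margin violated: shrink + correct
        w = (1 - eta * lam) * w + eta * y[i] * X[i]
    else:                               # margin satisfied: shrink only
        w = (1 - eta * lam) * w

def intensity(x):
    """Signed decision value; larger means more 'smile-like'."""
    return x @ w
```

The sign of `intensity(x)` gives the binary smile / non-smile decision, while its magnitude serves as the continuous intensity estimate that such studies compare against manual coding.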
“…Automatic smile detection has been already addressed considering different issues and exploring various dimensions (see for example (Whitehill, Littlewort, Fasel, Bartlett, & Movellan, 2009)). The proposed solutions vary indeed whether one wants to detect the presence or the absence of smile (An, Yang, & Bhanu, 2015;Chen, Ou, Chi, & Fu, 2017;Guo, Polania, & Barner, 2018;Shan, 2012;Zhang, Huang, Wu, & Wang, 2015) or rather one wants to estimate smile intensity (Bartlett, Littlewort, Braathen, Sejnowski, & Movellan, 2003;Bartlett et al, 2006;Girard, Cohn, & De la Torre, 2015;Jiang, Coskun, Badokhon, Liu, & Huang, 2019;Shimada, Matsukawa, Noguchi, & Kurita, 2010;Vinola & Vimala Devi, 2019). The methods applied also change if one is interested in classifying single face image (An et al, 2015;Chen et al, 2017;Guo et al, 2018;Jiang et al, 2019;Shan, 2012;Shimada et al, 2010;Zhang et al, 2015) rather than proposing a dynamical annotation of a video recording (Freire-Obregón & Castrillón-Santana, 2015).…”
Section: Introduction (mentioning)
confidence: 99%