2011 Third International Conference on Knowledge and Systems Engineering
DOI: 10.1109/kse.2011.49
Audiovisual Affect Recognition in Spontaneous Filipino Laughter

Cited by 8 publications (3 citation statements) · References 9 publications
“…A study on detecting emotions in Filipino laughter found that Multilayer Perceptron (MLP) yielded a higher correct classification rate (at 44%) compared with using SVM (18%) [ 73 ]. MLP considers the weights within a network to select features, and may be better suited for audio datasets, while SVM may perform better for video in cases where multimodal information is available [ 74 ]. SVM has also been used to classify laughter as polite or mirthful for a Japanese, Chinese and English dataset with at least 85% accuracy [ 75 ].…”
Section: Computational Approaches
confidence: 99%
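The comparison above (MLP at 44% versus SVM at 18% on audio features) can be illustrated with a minimal sketch. This is not the cited study's pipeline: the dataset is synthetic, standing in for extracted laughter audio features, and all parameter choices are illustrative assumptions.

```python
# Illustrative sketch only: comparing an MLP and an SVM on synthetic
# feature vectors that stand in for audio features (e.g. pitch, energy,
# MFCCs) in a five-class emotion task. Dataset and settings are invented.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a labeled audio-feature dataset (5 emotion classes)
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Both models get standardized inputs; hidden-layer size and kernel
# are arbitrary choices for the sketch, not the study's configuration.
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                  random_state=0))
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", random_state=0))

for name, clf in [("MLP", mlp), ("SVM", svm)]:
    clf.fit(X_train, y_train)
    print(f"{name} accuracy: {clf.score(X_test, y_test):.2f}")
```

On real laughter data, the relative ranking of the two models would depend on the features extracted and the dataset size, which is the point the citing authors make about modality-dependent model choice.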
“…Moreover, the study in [24] went further in trying to characterize different types of laughter. They investigated automatic discrimination of five types of acted laughter: happiness, giddiness, excitement, embarrassment and hurtful.…”
Section: Introduction
confidence: 99%
“…There has been growing evidence supporting the possibility of automatically discriminating between different emotions from various modalities: acoustics [40], facial expressions [41] and body movements [42], [43], [44], [45], [46], [47]. Galvan et al [48] investigated automatic discrimination of five types of acted laughter: happiness, giddiness, excitement, embarrassment and hurtful. Actors were asked to enact these five emotions using both vocal and facial expressions while they were video-recorded.…”
Section: Synthesis and Recognition of Laughter
confidence: 99%