2013 Humaine Association Conference on Affective Computing and Intelligent Interaction
DOI: 10.1109/acii.2013.28

Audiovisual Detection of Laughter in Human-Machine Interaction

Abstract: Laughter is clearly an audiovisual event, consisting of the laughter vocalization and of facial activity, mainly around the mouth and sometimes in the upper face. However, past research on laughter recognition has focused mostly on the information available in the audio channel alone, largely due to the lack of suitable audiovisual data. Only recently have a few works been published that combine audio and visual information, and most of them deal with the problem of discriminating laughter from speech or …

Cited by 12 publications (6 citation statements); references 19 publications. Citing publications span 2015–2023.

Citation statements (ordered by relevance):
“…Experiments on the automatic recognition of laughter from the audiovisual data contained in the SEMAINE-SAL database have been previously reported on a smaller number of instances [35]. Data were partitioned into speaker-independent training, validation and test partitions.…”
Section: Related Work (mentioning, confidence: 99%)
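The speaker-independent partitioning mentioned in this statement is a standard precaution for corpora of this kind: every segment from a given speaker must land in exactly one of the training, validation, and test sets, so the model is always evaluated on unseen speakers. Below is a minimal sketch of such a split. The synthetic features, labels, and speaker IDs are placeholders (not data from SEMAINE-SAL), and the use of scikit-learn's GroupShuffleSplit is an illustrative choice, not the procedure used in the cited work.

```python
# Sketch: speaker-independent train/validation/test split (assumed setup,
# not the cited paper's actual pipeline). Grouping by speaker ID ensures
# no speaker's segments appear in more than one partition.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))             # placeholder audiovisual feature vectors
y = rng.integers(0, 2, size=100)           # 1 = laughter, 0 = non-laughter
speakers = rng.integers(0, 10, size=100)   # one speaker ID per segment

# First hold out a test set by speaker, then split the remainder into
# training and validation sets, again grouped by speaker.
outer = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
trainval_idx, test_idx = next(outer.split(X, y, groups=speakers))

inner = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_rel, val_rel = next(inner.split(
    X[trainval_idx], y[trainval_idx], groups=speakers[trainval_idx]))
train_idx, val_idx = trainval_idx[train_rel], trainval_idx[val_rel]

# Sanity check: no speaker appears in more than one partition.
assert not set(speakers[train_idx]) & set(speakers[test_idx])
assert not set(speakers[val_idx]) & set(speakers[test_idx])
assert not set(speakers[train_idx]) & set(speakers[val_idx])
```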
“…Work on automatic recognition of laughter has also started to emerge but, as with the synthesis of laughter, has mostly focused on the acoustic modality, e.g., [29], [30], [31], [32], [33], [34], and more recently on the combination of face and voice cues [35], [36], [37]. Fukushima et al used electromyographic sensors to measure diaphragmatic activity, which drives laughter vocalisations, to detect laughter in people watching television [38].…”
Section: Synthesis and Recognition of Laughter (mentioning, confidence: 99%)
“…In the literature, behavior forecasting works mainly consider data at two levels of representation with an increasing degree of abstraction: low-level cues or features that are extracted manually or automatically from raw audiovisual data, and manually labeled high-order events or actions. The forecasting task has primarily been formulated to predict future event or action labels from observed cues or other high-order event or action labels [5, 6, 9-13]. Moreover, identifying patterns predictive of certain semantic events has been a long-standing topic of focus in the social sciences, where researchers primarily employ a top-down workflow.…”
Section: Introduction (mentioning, confidence: 99%)
“…exploratory or confirmatory analysis [14, 15]. Examples of such semantic events include speaker turn transitions [5, 6], mimicry episodes [13], the termination of an interaction [9, 10], or high-order social actions [11, 12].…”
Section: Introduction (mentioning, confidence: 99%)