2014
DOI: 10.1109/tamd.2014.2317513
The MEI Robot: Towards Using Motherese to Develop Multimodal Emotional Intelligence

Cited by 36 publications (27 citation statements)
References 54 publications
“…Several of the affect-detection systems designed for social robots have used dimensional models, mainly consisting of valence and arousal scales for recognition from facial expressions [76,77,125], body language [5], voice [81,82], physiological signals [4,70,86,193], and multi-modal inputs [193,203]. A small number of systems have utilized alternative affect classification scales, such as accessibility [79], engagement [61], predictability [81], stance [90], speed regularity and extent [203], stress [19,85], anxiety [84,190], and aversion and affinity [213].…”
Section: Discussion (mentioning)
Confidence: 99%
“…Sensors include ECG and EMG [73,83] to record physiological signals, microphones [83,90,199,200,202,203,205] to record voice intonations, and 2D cameras to capture facial information [90,199,200,202,205], and body language information [203]. These sensors have been integrated together in order to extract many features, including heart rate [73,83], voice pitch [83,90,199,200,202,203,205], gait features [203] and facial features [90,199,200,202,205]. Future research should continue to investigate a wide range of features for all modes in order to determine which combinations of features result in the highest recognition rates during real-world interactions.…”
Section: Discussion (mentioning)
Confidence: 99%
“…However, many challenges need to be addressed in order to meet such a requirement (Baker et al, 2009a; Moore, 2013, 2015), not least how to evolve the complexity of voice-based interfaces from simple structured dialogs to more flexible conversational designs without confusing the user (Bernsen et al, 1998; McTear, 2004; Lopez Cozar Delgado and Araki, 2005; Phillips and Philips, 2006; Moore, 2016b). In particular, seminal work by Nass and Brave (2005) showed how attention needs to be paid to users' expectations [e.g., selecting the "gender" of a system's voice (Crowell et al, 2009)], and this has inspired work on "empathic" vocal robots (Breazeal, 2003; Fellous and Arbib, 2005; Haring et al, 2011; Eyssel et al, 2012; Lim and Okuno, 2014; Crumpton and Bethel, 2016). On the other hand, user interface experts, such as Balentine (2007), have argued that such agents should be clearly machines rather than emulations of human beings, particularly to avoid the "uncanny valley effect" (Mori, 1970), whereby mismatched perceptual cues can lead to feelings of repulsion (Moore, 2012).…”
Section: Spoken Language Systems (mentioning)
Confidence: 99%