2011 Ninth Working IEEE/IFIP Conference on Software Architecture (WICSA 2011)
DOI: 10.1109/wicsa.2011.32
ABE: An Agent-Based Software Architecture for a Multimodal Emotion Recognition Framework

Cited by 37 publications (33 citation statements) · References 12 publications
“…Emotiv provides as direct output frustration and excitement in [11], long-term excitement, instantaneous excitement and engagement in [4,13] and excitement, engagement, boredom, meditation and frustration in [5] through its simple interface. In some cases information from the Emotiv sensor about the raw EEG and wave variations are combined with the information provided by other biological or non biological sensors, resulting in different types of emotion representation: anger, disgust, fear, happiness, sadness, joy, despisal, stress, concentration and excitement in [8], positive, negative and neutral state of mind in [3,20].…”
Section: Emotion Representation (mentioning)
confidence: 99%
“…EEG information is combined with information taken from the subject's face using visual input or observations in [2,5,8,11,17]. For the latter, "MindReader" -a system developed at MIT Media Lab -infers emotions from facial expressions and head movements in real-time.…”
Section: Multimodality (mentioning)
confidence: 99%
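The citation statements above describe combining affect estimates from an EEG headset (e.g., Emotiv's excitement, engagement, or frustration outputs) with estimates derived from facial expressions. The sketch below is only an illustration of that multimodal fusion idea, not the ABE architecture or the Emotiv API: all class names, label sets, and weights are hypothetical, and the fusion rule (confidence-weighted averaging over a shared label set) is one simple choice among many.

```python
# Hypothetical sketch: fusing per-modality emotion scores (EEG channel and
# facial-expression channel) into a single label via weighted averaging.
# Names, labels, and weights are illustrative assumptions, not from the paper.
from dataclasses import dataclass


@dataclass
class AffectEstimate:
    """Scores in [0, 1] for a shared set of emotion labels from one modality."""
    scores: dict[str, float]
    weight: float  # confidence assigned to this modality


def fuse(estimates: list[AffectEstimate]) -> str:
    """Return the label with the highest confidence-weighted average score."""
    labels = set().union(*(e.scores for e in estimates))
    total_weight = sum(e.weight for e in estimates) or 1.0
    fused = {
        label: sum(e.weight * e.scores.get(label, 0.0) for e in estimates) / total_weight
        for label in labels
    }
    return max(fused, key=fused.get)


if __name__ == "__main__":
    eeg = AffectEstimate({"excitement": 0.7, "frustration": 0.2}, weight=0.6)
    face = AffectEstimate({"excitement": 0.4, "frustration": 0.5}, weight=0.4)
    print(fuse([eeg, face]))  # -> "excitement"
```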