2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)
DOI: 10.1109/bibm.2017.8217968
Continuous affect prediction using eye gaze and speech

Abstract: Affective computing research has traditionally focused on labeling a person's emotion as one of a discrete number of classes, e.g. happy or sad. In recent times, more attention has been given to continuous affect prediction across dimensions in the emotional space, e.g. arousal and valence. Continuous affect prediction is the task of predicting a numerical value for different emotion dimensions. The application of continuous affect prediction is powerful in domains involving real-time audio-visual communications wh…

Cited by 4 publications (2 citation statements)
References 22 publications
“…From the results in [22], the bimodal fusion of EEG and eye-based features performed best overall (arousal = 67.7%, valence = 76.1%). Eye gaze was combined with speech in [28], where a feature set similar to that of [22] was used. Additional statistics were gathered for eye scan paths, and eye closure features were measured by frame counts instead of time.…”
Section: Related Work
confidence: 99%
“…Additional statistics were gathered for eye scan paths, and eye closure features were measured by frame counts instead of time. Results achieved in [28] showed that eye gaze, when combined with speech as part of a feature fusion, single support vector regression system, could improve arousal prediction compared to that of unimodal speech (3.5% relative performance improvement), while model fusion improved valence prediction compared to unimodal speech (19.5% relative performance improvement). Psychopathological affective computing work incorporating eye-based features as part of multimodal approaches includes post traumatic stress disorder estimation [29] and depression recognition [30], [31].…”
Section: Related Work
confidence: 99%
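The citation above distinguishes feature fusion (concatenating modalities into one regressor) from model fusion (training one regressor per modality and combining their outputs). The sketch below illustrates that distinction on synthetic data; it is not the authors' implementation, and it uses closed-form ridge regression (NumPy only) as a stand-in for the support vector regressor used in the paper. The feature dimensions and labels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-frame features: 10 speech dims, 6 eye-gaze dims,
# and a continuous arousal label driven by both modalities plus noise.
# (Dimensions and data are illustrative, not from the paper.)
n = 200
speech = rng.normal(size=(n, 10))
gaze = rng.normal(size=(n, 6))
arousal = (speech @ rng.normal(size=10)
           + gaze @ rng.normal(size=6)
           + 0.1 * rng.normal(size=n))

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Feature fusion: concatenate the modalities, train a single regressor.
X_fused = np.hstack([speech, gaze])
pred_feature_fusion = X_fused @ ridge_fit(X_fused, arousal)

# Model fusion: train one regressor per modality, average the predictions.
pred_model_fusion = 0.5 * (speech @ ridge_fit(speech, arousal)
                           + gaze @ ridge_fit(gaze, arousal))

for name, pred in [("feature fusion", pred_feature_fusion),
                   ("model fusion", pred_model_fusion)]:
    r = np.corrcoef(pred, arousal)[0, 1]
    print(f"{name}: correlation with ground truth = {r:.3f}")
```

On this toy linear data both strategies work well; the paper's finding is that their relative merits differ per emotion dimension (feature fusion helped arousal, model fusion helped valence).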