Proceedings of the 4th International Workshop on Audio/Visual Emotion Challenge 2014
DOI: 10.1145/2661806.2661810

Multimodal Prediction of Affective Dimensions and Depression in Human-Computer Interactions

Abstract: Depression is one of the most common mood disorders. Technology has the potential to assist in screening and treating people with depression by robustly modeling and tracking the complex behavioral cues associated with the disorder (e.g., speech, language, facial expressions, head movement, body language). Similarly, robust affect recognition is another challenge which stands to benefit from modeling such cues. The Audio/Visual Emotion Challenge (AVEC) aims toward understanding the two phenomena and modeling t…

Cited by 90 publications (56 citation statements)
References 31 publications
“…It was also compared with all the state-of-the-art methods in the AVEC2014 affect recognition sub-challenge, with fairly good performance. NLPR [4], SAIL [9], BU-CMPE [11] and our method achieve better performance than the baseline method, while Ulm [10] achieves the best performance. However, it utilized extra information on subjects and the annotation process that is not comparable with other methods.…”
Section: Conclusion and Discussion
confidence: 85%
“…The single-modality regressors are then combined using particle filtering, by treating these independent regression outputs as measurements of the affective states in a Bayesian filtering framework, where previous observations provide a prediction about the current state by means of learned affect dynamics. At the AVEC2014 affect recognition sub-challenge, the temporal relations in naturalistic expressions were used to boost performance in decision-level filtering [10] [9]. Kachele et al [10] proposed an approach based on abstract meta-information about individual subjects, as well as prototypical task- and label-dependent templates, to infer the respective emotional states, and achieved the best performance.…”
Section: Related Work
confidence: 99%
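The fusion scheme described in the statement above — treating each modality's regression output as a noisy measurement of a latent affective state inside a Bayesian filter — can be sketched with a minimal bootstrap particle filter. This is an illustrative stand-in, not the cited system: the random-walk transition, the Gaussian likelihoods, and all noise parameters below are assumed placeholders for the learned affect dynamics and measurement models of the original work.

```python
import math
import random

random.seed(0)

def particle_filter_fusion(audio_preds, video_preds, n_particles=500,
                           process_noise=0.05, obs_noise=0.2):
    """Fuse per-frame affect predictions from two modality regressors.

    Hypothetical sketch: each modality's output is a noisy measurement of
    the latent affective state; a random-walk transition stands in for
    learned affect dynamics.
    """
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    weights = [1.0 / n_particles] * n_particles
    fused = []
    for a, v in zip(audio_preds, video_preds):
        # Predict step: propagate particles under random-walk dynamics.
        particles = [p + random.gauss(0.0, process_noise) for p in particles]
        # Update step: weight each particle by the likelihood of both
        # modality measurements under a Gaussian observation model.
        weights = [w * math.exp(-0.5 * (((a - p) / obs_noise) ** 2 +
                                        ((v - p) / obs_noise) ** 2))
                   for w, p in zip(weights, particles)]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Fused estimate: posterior mean over the weighted particles.
        fused.append(sum(w * p for w, p in zip(weights, particles)))
        # Resample when the effective sample size degenerates.
        if 1.0 / sum(w * w for w in weights) < n_particles / 2:
            particles = random.choices(particles, weights=weights, k=n_particles)
            weights = [1.0 / n_particles] * n_particles
    return fused
```

Fed two streams of per-frame valence predictions, the filter returns a smoothed fused trajectory; the resampling step keeps the particle set from collapsing onto a few high-weight particles.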
“…Liyanage C. De Silva and Pei Chi Ng of Singapore used statistical techniques and Hidden Markov Models (HMMs) [8] for the recognition of emotions. The method classifies six fundamental emotions, namely anger, dislike, fear, happiness, sadness and surprise, from facial expressions and emotional speech.…”
Section: Figure 2: Results of Feature-Level Fusion [3]
confidence: 99%
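The HMM-based classification mentioned in the statement above typically trains one HMM per emotion and labels a sequence with the emotion whose model assigns it the highest likelihood. The toy sketch below shows that pattern with the forward algorithm; for brevity it uses two of the six emotions, and all probabilities are invented placeholders, not parameters from the cited work.

```python
def hmm_likelihood(obs, start, trans, emit):
    """Forward algorithm: P(obs | model) for a discrete-observation HMM."""
    n = len(start)
    # Initialize with start probabilities and the first observation.
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        # Propagate through the transition matrix, then absorb the emission.
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return sum(alpha)

def classify(obs, models):
    """Pick the emotion whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda name: hmm_likelihood(obs, *models[name]))

# Illustrative two-state models over a binary observation alphabet {0, 1}.
start = [0.5, 0.5]
trans = [[0.7, 0.3], [0.3, 0.7]]
models = {
    "happiness": (start, trans, [[0.9, 0.1], [0.6, 0.4]]),  # favors symbol 0
    "anger":     (start, trans, [[0.1, 0.9], [0.4, 0.6]]),  # favors symbol 1
}
```

A sequence dominated by symbol 0 is then assigned to "happiness", and one dominated by symbol 1 to "anger"; a real system would use six trained models over quantized facial and acoustic features.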
“…Thus, it was possible to outperform all other submitted approaches that used elaborate learning techniques, such as deep neural networks (Chao et al, 2014) and support vector machines (Gupta et al, 2014), on complex audio and video features.…”
Section: Example: AVEC 2014
confidence: 99%