2020
DOI: 10.48550/arxiv.2002.00883
Preprint

Adversarial-based neural networks for affect estimations in the wild

Abstract: There is growing interest in affective computing research given its crucial role in bridging humans and computers. Progress has recently been accelerated by the availability of larger datasets. One recent advance in this field is the use of adversarial learning to improve model training through augmented samples. However, the use of latent features, which adversarial learning makes feasible, has not yet been widely explored. This technique may also improve the performance of affective models, as…
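To make the abstract's idea of reusing adversarially learned latent features more concrete, the following is a minimal, hypothetical PyTorch-style sketch in which an encoder's latent vector feeds a valence/arousal regressor. All module and variable names are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy convolutional encoder; its output plays the role of the 'latent features'."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, x):
        return self.features(x)

class AffectHead(nn.Module):
    """Regresses continuous affect values from the latent features."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.regressor = nn.Linear(latent_dim, 2)  # [valence, arousal]

    def forward(self, z):
        return torch.tanh(self.regressor(z))  # affect values constrained to [-1, 1]

# Usage sketch: in an adversarial setup the encoder would normally be trained
# as part of a generator/discriminator pair; here we only show how its latent
# output could be consumed by the affect head.
encoder, head = Encoder(), AffectHead()
frames = torch.randn(8, 3, 64, 64)        # dummy batch of face crops
valence_arousal = head(encoder(frames))   # shape: (8, 2)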

Cited by 3 publications (4 citation statements)
References 32 publications
“…The EmoFAN deep learning model [1] jointly predicted discrete emotional states and continuous affect dimensions by building upon the face alignment network proposed in [48], thereby achieving the best performance on the AfewVA dataset [49]. Aspandi et al [50] estimated affect in the wild by exploiting adversarial neural networks that build high-level representations of audiovisual information. Parthasarathy and Sundaram [51] demonstrated that multimodal deep learning affect models can significantly improve affect detection in the wild.…”
Section: Affect Modeling in the Wild (mentioning)
confidence: 99%
“…EmoFAN achieves the best performance on the AfewVA dataset [31]. Aspandi et al [32] use adversarial-based neural networks to learn latent representations from audiovisual signals for estimating affect in the wild. The authors of [33] use multimodal transformers to capture and exploit temporal dynamics of audiovisual information towards detecting affect states.…”
Section: B. Affect Modeling in the Wild (mentioning)
confidence: 99%
“…In the literature, estimation of activation, valence and dominance attributes is referred to as Continuous Emotion Recognition (CER). Although CER is widely studied by the speech processing community [6], [7], [8], it has also been studied over visual channels [9], [10], [11]. In [9], a deep attention-based convolutional network is proposed to detect facial expressions and emotions.…”
Section: Introduction (mentioning)
confidence: 99%
“…In [9], a deep attention-based convolutional network is proposed to detect facial expressions and emotions. Aspandi et al [10] proposed an adversarial training approach to jointly estimate whether the image is fake or not while estimating the activation and valence (AV) attributes. In [11], VGG-16-driven visual features are used as the input of a stacked convolutional recurrent neural network (CRNN) for affect recognition in the wild.…”
Section: Introduction (mentioning)
confidence: 99%
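The citation statement above describes a joint formulation in which the adversarial model decides real versus fake while also regressing activation and valence. Below is a hedged, hypothetical sketch of such a multi-task discriminator; the layer sizes, losses, and names are assumptions made for illustration only, not the authors' actual model.

import torch
import torch.nn as nn

class MultiTaskDiscriminator(nn.Module):
    """Shared backbone with two heads: a real/fake logit and an AV regression output."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.real_fake = nn.Linear(64, 1)   # adversarial decision: real vs. generated
        self.affect = nn.Linear(64, 2)      # [activation, valence]

    def forward(self, x):
        h = self.backbone(x)
        return self.real_fake(h), torch.tanh(self.affect(h))

# One hypothetical discriminator-side training step on dummy data.
disc = MultiTaskDiscriminator()
bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()
real_imgs = torch.randn(4, 3, 64, 64)     # dummy real face crops
fake_imgs = torch.randn(4, 3, 64, 64)     # dummy generator outputs
av_labels = torch.rand(4, 2) * 2 - 1      # dummy activation/valence labels in [-1, 1]

rf_real, av_pred = disc(real_imgs)
rf_fake, _ = disc(fake_imgs)
loss = (bce(rf_real, torch.ones_like(rf_real))
        + bce(rf_fake, torch.zeros_like(rf_fake))
        + mse(av_pred, av_labels))
loss.backward()

In this sketch the adversarial (real/fake) and affect-regression objectives share a backbone, which is one plausible reading of "jointly estimate"; the cited work may combine the objectives differently.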