Training Affective Computer Vision Models by Crowdsourcing Soft-Target Labels (2021)
DOI: 10.1007/s12559-021-09936-4

Cited by 18 publications (13 citation statements)
References 94 publications
“…Our plan is to develop the STAND app into a versatile tool that can be customized for researching and gathering computer vision data of the face to build personalized machine learning models which can support digital health interventions related to mental states and developmental disorders. This work can integrate with existing research in using automatic emotion recognition for a variety of contexts, including mental illness diagnosis, recognizing human social and physiological interactions, and developing sociable robotics and other human-computer interaction systems [57, 85–89]. For example, emotional expressions have a crucial role in recognizing certain types of developmental disorders.…”
Section: Discussion and Future Work
confidence: 99%
“…Previous work examined the use of crowdsourced annotations for autism, indicating that similar approaches could perhaps be applied through audio [31, 46–51]. Audio feature extraction combined with other autism classifiers could be used to create an explainable diagnostic system [52–64] fit for mobile devices [60]. Previous work investigated using such classifiers to detect autism or approach autism-related tasks like identifying emotion to improve socialization skills; combining computer vision-based quantification of relevant areas of interest, including hand stimming [58], upper limb movement [63], and eye contact [62, 64], could possibly result in interpretable models.…”
Section: Future Work
confidence: 99%
“…We considered, in particular, three different representations of ratings, which include some of the most common label representation approaches in the FER literature (Ko, 2018; Washington et al, 2021):
- Multi-label representation: each rater's judgement is expressed in terms of the emotion(s) that they considered most intense, discarding the other, less intense ones; for instance, for rater i, the list of emotions 'surprise' and 'enjoyment'.
- Distribution-based representation: each rater's judgement is represented in percentage terms of a whole constituted by all the non-absent emotions; for instance, for rater i, the list 'enjoyment': 67%, 'surprise': 23%, 'anger': 10%.
- Ordinal representation: each rater's judgement is simply expressed as the list of reported emotional intensity values between 1 and 5.
Clearly, the multi-label representation corresponds to a categorical emotion model, while the ordinal and distribution-based representations take into account features of both categorical and dimensional emotion models.…”
Section: User Study: Methods
confidence: 99%
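
The three rating representations quoted above map onto simple data structures. Below is a minimal Python sketch, assuming a five-point intensity scale with 0 meaning "absent" and deriving the multi-label and distribution-based forms from a single ordinal rating; the emotion set and example values are illustrative and not taken from the cited study, which collects each representation as a separate rater judgement.

    # Hypothetical ordinal rating for one rater: intensity 1-5 per emotion, 0 = absent.
    ordinal = {"enjoyment": 4, "surprise": 4, "anger": 2}

    # Multi-label representation: keep only the emotion(s) rated most intense.
    max_intensity = max(ordinal.values())
    multi_label = [e for e, v in ordinal.items() if v == max_intensity]

    # Distribution-based representation: express the non-absent intensities
    # as percentages of their total.
    present = {e: v for e, v in ordinal.items() if v > 0}
    total = sum(present.values())
    distribution = {e: round(100 * v / total) for e, v in present.items()}

    print(multi_label)   # ['enjoyment', 'surprise']
    print(distribution)  # {'enjoyment': 40, 'surprise': 40, 'anger': 20}

Deriving the distribution from the ordinal values is an assumption made here for brevity; it is one plausible way to relate the two forms, not a procedure described in the quoted study.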
“…We considered, in particular, three different representations of ratings, which include some of the most common label representation approaches in the FER literature (Ko, 2018; Washington et al, 2021):…”
Section: First Experiment: Inter-rater Reliability
confidence: 99%