2022
DOI: 10.2196/34333

Automatic Assessment of Emotion Dysregulation in American, French, and Tunisian Adults and New Developments in Deep Multimodal Fusion: Cross-sectional Study

Abstract: Background: Emotion dysregulation is a key dimension of adult psychological functioning. There is interest in developing a computer-based, multimodal, and automatic measure. Objective: We wanted to train a deep multimodal fusion model to estimate emotion dysregulation in adults based on their responses to the Multimodal Developmental Profile, a computer-based psychometric test, using only a small training sample and without transfer learning. …
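For orientation only, the sketch below illustrates one common pattern for deep multimodal fusion of the kind named in the abstract: separate per-modality encoders whose embeddings are concatenated and passed to a small regression head that outputs an emotion-dysregulation score. This is a minimal, generic PyTorch example, not the architecture reported in the paper; the class name, modality choices, and feature dimensions are placeholders.

```python
# Minimal late-fusion sketch (illustrative only; not the authors' model).
# Assumes fixed-size feature vectors per modality (e.g., precomputed
# video, audio, and text features).
import torch
import torch.nn as nn

class LateFusionRegressor(nn.Module):  # hypothetical name
    def __init__(self, video_dim=128, audio_dim=64, text_dim=300, hidden=64):
        super().__init__()
        # One small encoder per modality
        self.video_enc = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        # Fusion head maps the concatenated embeddings to a single score
        self.head = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, video, audio, text):
        fused = torch.cat(
            [self.video_enc(video), self.audio_enc(audio), self.text_enc(text)],
            dim=-1,
        )
        return self.head(fused).squeeze(-1)

# Example forward pass on random tensors (batch of 4)
model = LateFusionRegressor()
score = model(torch.randn(4, 128), torch.randn(4, 64), torch.randn(4, 300))
print(score.shape)  # torch.Size([4])
```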

Cited by 3 publications (3 citation statements). References 35 publications.
“…[302] The success of transformer networks in jointly modeling video, speech, and language data has catalyzed multi-modal modeling in mental health [303]. Multi-modal modeling techniques can also be used in characterizing symptoms such as emotion dysregulation [304], loneliness [305], and sentiment analysis [306]. In a prognostic study, an SVM-based multi-modal ML approach was developed to integrate clinical, neurocognitive, neuroimaging, and genetic information to predict psychosis in patients with clinical high-risk states.…”
Section: Multi-modal Data Fusion in Diagnostic Analytics
Citation type: mentioning (confidence: 99%)
“…The success of transformer networks in jointly modeling video, speech, and language data has catalyzed multimodal modeling in mental health [295]. Multimodal modeling techniques can also be used in modeling symptoms such as emotion dysregulation [296], loneliness [297] and sentiment analysis [298]. In a prognostic study, an SVM-based multimodal ML approach was developed to integrate clinical, neurocognitive, neuroimaging, and genetic information to predict psychosis in patients with clinical high-risk states [76].…”
Section: Multimodal Fusion of Non-imaging Data
Citation type: mentioning (confidence: 99%)
“…The success of transformer networks in jointly modeling video, speech, and language data has catalyzed multimodal modeling in mental health [296]. Multimodal modeling techniques can also be used in modeling symptoms such as emotion dysregulation [297], loneliness [298] and sentiment analysis [299]. In a prognostic study, an SVM-based multimodal ML approach was developed to integrate clinical, neurocognitive, neuroimaging, and genetic information to predict psychosis in patients with clinical high-risk states [76].…”
Section: Multimodal Fusion of Non-imaging Data
Citation type: mentioning (confidence: 99%)
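The citing statements above also mention an SVM-based multimodal approach that concatenates features from several sources before classification. As a rough illustration of that generic early-fusion pattern (not the cited study's actual pipeline or data), the scikit-learn sketch below concatenates placeholder feature blocks column-wise and fits a support vector classifier; all feature names and labels here are synthetic.

```python
# Generic early-fusion SVM baseline (illustrative only; synthetic data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
clinical = rng.normal(size=(n, 10))       # placeholder clinical scores
neurocognitive = rng.normal(size=(n, 5))  # placeholder cognitive measures
imaging = rng.normal(size=(n, 20))        # placeholder imaging features
y = rng.integers(0, 2, size=n)            # placeholder binary outcome

# Early fusion: concatenate modality blocks into one feature matrix
X = np.hstack([clinical, neurocognitive, imaging])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```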