2020
DOI: 10.1007/s11042-020-10128-9

Decoding depressive disorder using computer vision



Cited by 15 publications (4 citation statements)
References 82 publications
“…Data from 50 subjects were used to train the model, and the results showed that their framework improved depression prediction performance, with accuracies of 80%, 78%, and 72% on the EEG, speech, and facial data, respectively. Singh and Goyal [24] attempted to decode depressive disorder using computer vision. A questionnaire [33] on Attention Deficit Hyperactivity Disorder (ADHD) was administered to a total of 401 volunteers.…”
Section: Related Work (mentioning)
confidence: 99%
“…In addition to being used in the education domain, AEMS can also be used in many other domains, such as entertainment (Wang, S., & Ji, Q., 2015), healthcare (Singh & Goyal, 2021), shopping (Yolcu et al., 2020), and more. Since AEMS can be used in various fields, each field needs to redesign a different set of contextual features according to the engagement dimensions to obtain better predictions.…”
Section: Significance of AEMS (mentioning)
confidence: 99%
“…• Speech: para-verbal features (e.g., speed, silences, pauses) [3,5,8-10], as well as non-verbal features [17-20], in read and spontaneous speech;
• Handwriting and drawing [4,6,8], mainly focusing on the shape of the drawn lines;
• Video analysis: facial expressions [21], eye movements [22];
• Content of written and spoken words [23-26];
• Electroencephalogram (EEG) [27-31];
• Multimodality: more than one source of data is used and combined to improve detection performance [32,33].…”
Section: Related Work (mentioning)
confidence: 99%