2021
DOI: 10.1007/s12559-021-09901-1

Applying Attention-Based Models for Detecting Cognitive Processes and Mental Health Conditions

Abstract: According to the psychological literature, implicit motives allow for the characterization of behavior, subsequent success, and long-term development. In contrast to personality traits, implicit motives are often deemed to be rather stable personality characteristics. Normally, implicit motives are obtained by Operant Motives, unconscious intrinsic desires measured by the Operant Motive Test (OMT). The OMT requires participants to freely write descriptions associated with a set of provided images and questio…

Cited by 3 publications (3 citation statements)
References 48 publications
“…A work based on self-attentional architectures described the use of such NLP models to classify 209,000 texts from the GermEval 2020 task [12]. Pieces of free text describing a person's situation, feelings, and actions in response to simple drawings of the Operant Motive Test [13] were classified by trained psychologists into five possible motives and rated on six possible levels.…”
Section: Supervised Learning To Analyze Free Texts' Contents
confidence: 99%
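
To make the classification setup in this statement concrete, here is a minimal sketch of fine-tuning a transformer classifier on OMT-style free-text answers. It assumes a Hugging Face Trainer workflow; the checkpoint name (bert-base-german-cased), the toy German examples, the label count, and the training settings are illustrative assumptions, not details taken from the cited work.

# Minimal sketch: fine-tune a German BERT to predict OMT-style motive labels.
# Checkpoint, toy data and hyperparameters are illustrative assumptions.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-german-cased"  # assumed checkpoint
NUM_MOTIVES = 5                        # five possible motives, per the statement

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_MOTIVES)

# Toy records standing in for the GermEval 2020 free-text answers.
train = Dataset.from_dict({
    "text": ["Die Person fühlt sich unter Druck gesetzt ...",
             "Er möchte die Gruppe anführen ..."],
    "label": [0, 1],
})

def tokenize(batch):
    # Convert raw answer texts into fixed-length token ID sequences.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

train = train.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="omt-clf", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=train,
)
trainer.train()

In practice the rating of each text on six levels would be handled analogously, e.g. as a second classification head or a separate fine-tuned model; the sketch above only shows the motive classifier.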
“…Similarly, a close examination of BERT's attention weights in the aforementioned GermEval classification task shows that the transformer pays more attention to form features (i.e., use of personal pronouns, stop words, negation, punctuation marks, unknown words, and some conjugation styles) than to content words [12]. While the interpretability of attention weights proves difficult, other authors have conducted a layer-by-layer examination of the structure of transformers by probing the corresponding hidden outputs.…”
Section: Supervised Learning To Analyze Free Texts' Contents
confidence: 99%
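
The kind of attention-weight inspection mentioned above can be reproduced in outline with the Hugging Face transformers API, which returns per-layer attention matrices when output_attentions=True. The checkpoint name, example sentence, and the simple averaging heuristic below are illustrative assumptions, not the analysis procedure of the cited study.

# Minimal sketch: inspect which tokens receive attention in a BERT encoder.
# Checkpoint name and example sentence are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-german-cased"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_attentions=True)
model.eval()

text = "Sie will nicht, dass er die Entscheidung alleine trifft."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
attentions = torch.stack(outputs.attentions)  # (layers, batch, heads, seq, seq)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

# Average attention each token receives across layers, heads and query
# positions: a crude proxy for whether form words (pronouns, negation,
# punctuation) or content words dominate the model's focus.
received = attentions.mean(dim=(0, 1, 2, 3))
for tok, score in zip(tokens, received.tolist()):
    print(f"{tok:>12s}  {score:.4f}")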