Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology 2019
DOI: 10.18653/v1/w19-3023

An Investigation of Deep Learning Systems for Suicide Risk Assessment

Abstract: This work presents the systems explored as part of the CLPsych 2019 Shared Task. More specifically, this work explores the promise of deep learning systems for suicide risk assessment.

Cited by 19 publications (14 citation statements)
References 4 publications
“…His findings show CNN to outperform other phenotyping algorithms on the prediction of 10 phenotypes. Morales et al [35] showed the strength of CNN and LSTM models for suicide risk assessment, presenting results for novel personality and tone features. Bhat et al [36] and [37] highlighted CNN's performance over other approaches in identifying suicidal tendencies among adolescents.…”
Section: Background and Related Work (mentioning)
confidence: 99%
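The CNN systems referenced above are word-level convolutional text classifiers. Below is a minimal sketch in PyTorch, assuming illustrative hyperparameters (vocabulary size, embedding width, filter widths) and four output classes matching the shared task's four risk levels; these are not the cited systems' actual settings.

import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """Word-level CNN text classifier (illustrative sketch)."""
    def __init__(self, vocab_size=20000, embed_dim=100,
                 num_filters=100, kernel_sizes=(3, 4, 5), num_classes=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # One 1-D convolution per n-gram width over the embedded sequence.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes
        )
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed, seq)
        # Max-pool each feature map over time, then concatenate.
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))       # (batch, num_classes)

# Example: logits for a batch of 8 posts, 50 tokens each.
logits = TextCNN()(torch.randint(1, 20000, (8, 50)))

Max-pooling over time picks up the strongest n-gram signal anywhere in a post, which is one reason shallow CNNs stay competitive on short-text classification.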
“…We leverage two types of models: 1) Convolutional Neural Networks (CNNs) and 2) pretrained language models, which were used in the relevant shared task (Zirikly et al, 2019). The teams that participated in the shared task demonstrated that CNNs are effective for the risk classification task (Morales et al, 2019). Also, ASU (Ambalavanan et al, 2019) showed that fine-tuning a pre-trained language model is highly effective.…”
Section: Methods (mentioning)
confidence: 99%
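For the fine-tuning approach credited to ASU, the sketch below uses the Hugging Face transformers Trainer. The bert-base-uncased checkpoint, the toy two-post dataset, and the four-label setup are assumptions for illustration, not the cited system's configuration.

from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Toy in-memory data standing in for the shared-task posts
# (assumption: integer labels 0-3 for four risk levels).
data = Dataset.from_dict({
    "text": ["example post one", "example post two"],
    "label": [0, 3],
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4)

def encode(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

data = data.map(encode, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=data,
)
trainer.train()

The classification head on top of the pre-trained encoder is randomly initialized, so a real run would need the full labeled training split and an evaluation set rather than this toy data.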
“…Transformers are typically trained on single messages, or pairs of messages, at a time. Since we are tuning towards a human-level task, we label each user's message with their human-level attribute and treat it as a standard document-level task (Morales et al, 2019). Since we are interested in relative differences in performance, we limit each user to at most 20 randomly sampled messages (approximately the median number of messages per user) to save compute time for the fine-tuning experiments.…”
Section: Transformer Representations (mentioning)
confidence: 99%
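The relabeling and subsampling step described above is straightforward to express in code. The sketch below uses a hypothetical expand_user_labels helper and toy data; the 20-message cap comes from the quoted text, everything else is an assumption.

import random

def expand_user_labels(users, max_messages=20, seed=0):
    """users: iterable of (user_label, [message, ...]) pairs.
    Returns a flat list of (message, user_label) examples, so the
    user-level attribute becomes a document-level label."""
    rng = random.Random(seed)
    examples = []
    for label, messages in users:
        # Cap each user at max_messages randomly sampled messages.
        if len(messages) > max_messages:
            messages = rng.sample(messages, max_messages)
        examples.extend((m, label) for m in messages)
    return examples

# Toy usage: one prolific user gets capped, one sparse user does not.
users = [("high_risk", [f"post {i}" for i in range(50)]),
         ("low_risk", ["a single post"])]
train_examples = expand_user_labels(users)  # 21 (message, label) pairs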