2014
DOI: 10.1007/978-3-319-07983-7_36

Mining Twitter for Suicide Prevention

Abstract: Automatically detecting suicidal people in social networks is a real social issue. In France, suicide attempts are an economic burden with strong socio-economic consequences. In this paper, we describe a complete process to automatically collect suspect tweets according to a vocabulary of topics that suicidal persons tend to talk about. We automatically capture tweets indicating risky suicidal behaviour using simple classification methods. An interface for psychiatrists has been implemented to enable them to …
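The abstract outlines a two-step pipeline: collect candidate tweets by matching them against a vocabulary of topics that suicidal people tend to talk about, then run a simple classifier over the collected texts. Below is a minimal sketch of the collection step only; the topic names, keywords and helper functions are illustrative assumptions, not the vocabulary or code actually built by the authors.

```python
# Minimal sketch of the vocabulary-based collection step: keep only tweets
# whose text matches at least one term from a topic vocabulary.
# The topic names and keywords below are illustrative placeholders,
# not the vocabulary actually built by the authors.
import re
from typing import Dict, Iterable, List

TOPIC_VOCABULARY: Dict[str, List[str]] = {
    "despair": ["hopeless", "worthless", "can't go on"],
    "self_harm": ["hurt myself", "cutting"],
    "farewell": ["goodbye forever", "last message"],
}

def matched_topics(text: str, vocabulary: Dict[str, List[str]]) -> List[str]:
    """Return the topics whose keywords appear in the tweet text."""
    lowered = text.lower()
    return [
        topic
        for topic, terms in vocabulary.items()
        if any(re.search(r"\b" + re.escape(term) + r"\b", lowered) for term in terms)
    ]

def collect_suspect_tweets(tweets: Iterable[str]) -> List[dict]:
    """Filter a stream of tweet texts down to the 'suspect' ones."""
    suspects = []
    for text in tweets:
        topics = matched_topics(text, TOPIC_VOCABULARY)
        if topics:
            suspects.append({"text": text, "topics": topics})
    return suspects
```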

Cited by 57 publications (38 citation statements)
References 1 publication
“…This dataset, which may be considered longitudinal data, was collected from 10 unique real users who demonstrated a serious change in speech or online behavior in their Twitter accounts. Two initial validation cases, which ended with the individual committing suicide, were previously identified by [40]. We were able to identify a third fatal case and an additional seven other cases where the individuals demonstrated an abrupt change in behavior.…”
Section: Experimental Evaluation Data Collection and Annotation
confidence: 93%
“…In recent years, several works have investigated the possibility of building models to classify suicide-related content on social networks. [Abboute et al 2014] described a complete process for automatically collecting suspect tweets according to a vocabulary of terms, created by the authors themselves, that suicidal people tend to use. Using a corpus that also included confirmed cases, the tweets were classified as 'risky' and 'non risky'.…”
Section: Related Work
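This summary, like the abstract, describes labelling tweets as 'risky' versus 'non risky' with simple classification methods over a labelled corpus. The sketch below shows one plausible form such a binary classifier could take; the bag-of-words plus Naive Bayes pipeline and the tiny corpus are assumptions for illustration, not the authors' actual setup.

```python
# Minimal sketch: train a binary "risky" / "non risky" tweet classifier
# on a labelled corpus. Bag-of-words + Naive Bayes is one plausible
# "simple classification method"; the paper does not mandate this choice.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; a real experiment would use annotated tweets.
train_texts = [
    "i can't take this anymore, nobody would miss me",
    "feeling hopeless and alone tonight",
    "great day at the beach with friends",
    "just finished a good book, highly recommend it",
]
train_labels = ["risky", "risky", "non risky", "non risky"]

model = make_pipeline(
    CountVectorizer(lowercase=True, ngram_range=(1, 2)),
    MultinomialNB(),
)
model.fit(train_texts, train_labels)

print(model.predict(["everything feels hopeless, i want it to end"]))
```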
“…The variation here could suggest the flippant use of such phrases on social media when having a bad day, hence the additional challenges posed to classification of suicidal ideation on social media. Finally, [1] used machine learning to classify 'risky' and 'non risky' tweets, as defined by human annotators, with an accuracy of around 60%. They created word lists to represent a number of topics and emotions related to suicide, finding references to insults, hurt and bullying in the 'risky' category.…”
Section: Related Work
confidence: 99%
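The statement above mentions word lists representing topics and emotions related to suicide (insults, hurt, bullying). A common way to turn such lists into classifier features is to count, per tweet, how many terms from each list occur; the sketch below illustrates that idea with invented lists, not the lists used in [1].

```python
# Sketch: turn per-topic word lists into count features for a tweet.
# The lists below are illustrative, not the ones built in [1].
from typing import Dict, List

WORD_LISTS: Dict[str, List[str]] = {
    "insults": ["loser", "stupid", "worthless"],
    "hurt": ["pain", "hurt", "suffering"],
    "bullying": ["bully", "picked on", "laughed at"],
}

def word_list_features(text: str, word_lists: Dict[str, List[str]] = WORD_LISTS) -> Dict[str, int]:
    """Count, for each word list, how many of its terms occur in the tweet."""
    lowered = text.lower()
    return {name: sum(lowered.count(term) for term in terms)
            for name, terms in word_lists.items()}

print(word_list_features("They laughed at me again, the pain never stops"))
# {'insults': 0, 'hurt': 1, 'bullying': 1}
```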
“…These were incorporated because of the particularly emotive nature of the task. Emotions such as fear, anger and general aggressiveness are particularly prominent in suicidal communication [1].
• Features representing idiosyncratic language expressed in short, informal text such as social media posts within a limited number of characters. These were extracted from the annotated Tumblr posts we collected to try to incorporate the language used on social media that may not be identified using standard text mining features.…”
Section: Feature Preparation
confidence: 99%
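The feature groups quoted above pair emotion-oriented cues with idiosyncratic, informal social-media language. A hedged sketch of what such surface features might look like is given below; the specific signals (elongated words, all-caps tokens, hashtags, message length) are illustrative guesses at "idiosyncratic language" features, not the exact feature set used in that work.

```python
# Sketch: surface features for short, informal social-media text.
# The chosen signals (elongations, all-caps tokens, hashtags, length)
# are illustrative; the cited work defines its own feature set.
import re
from typing import Dict

def informal_text_features(text: str) -> Dict[str, float]:
    tokens = text.split()
    n_tokens = max(len(tokens), 1)
    return {
        # Words with a character repeated 3+ times, e.g. "soooo", "noooo".
        "elongated_words": sum(bool(re.search(r"(.)\1{2,}", tok)) for tok in tokens),
        # Fully upper-case tokens of length >= 2, a rough shouting signal.
        "all_caps_ratio": sum(tok.isupper() and len(tok) >= 2 for tok in tokens) / n_tokens,
        "hashtags": sum(tok.startswith("#") for tok in tokens),
        "exclamations": text.count("!"),
        "length_chars": float(len(text)),
    }

print(informal_text_features("I am soooo DONE with everything!!! #tired"))
```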