2022
DOI: 10.1038/s41537-021-00197-6

Identifying schizophrenia stigma on Twitter: a proof of principle model using service user supervised machine learning

Abstract: Stigma has negative effects on people with mental health problems by making them less likely to seek help. We develop a proof of principle service user supervised machine learning pipeline to identify stigmatising tweets reliably and understand the prevalence of public schizophrenia stigma on Twitter. A service user group advised on the machine learning model evaluation metric (fewest false negatives) and features for machine learning. We collected 13,313 public tweets on schizophrenia between January and May …

Cited by 17 publications (8 citation statements)
References 30 publications
“…This serves as further testing for our ML model on a set of tweets independent from those used for training, which is commonly used in validating ML models [45,46]. Our past research has shown that this additional step can help clarify a small difference in accuracies and demonstrate a clearer difference in performance between top-performing models, confirming that this is an important step [31]. We randomly selected 150 tweets from our second sample of 96,356 tweets and split them into 3 batches of 50.…”
Section: Blind Validation (mentioning)
confidence: 88%
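The sampling step described in the quoted passage is straightforward to reproduce. A minimal sketch, assuming plain Python and illustrative names (the original pipeline code is not shown here):

```python
# Sketch of the blind-validation sampling step: draw 150 tweets at random
# from the second, independent sample and split them into 3 batches of 50.
# Variable and function names are illustrative, not from the cited paper.
import random

def make_validation_batches(tweets, n=150, batch_size=50, seed=42):
    """Randomly select n tweets and split them into batches of batch_size."""
    rng = random.Random(seed)
    sample = rng.sample(tweets, n)
    return [sample[i:i + batch_size] for i in range(0, n, batch_size)]

# e.g. with the 96,356-tweet sample mentioned in the quote:
# batches = make_validation_batches(second_sample_tweets)  # -> 3 lists of 50 tweets
```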
“…In accordance with the literature, tweets were preprocessed, which involved them being lemmatized first [ 31 ]. This ensured the words in the tweets were in their stem form (eg, “depression,” “depressed,” and “depressing,” would all be converted into “depress”); this removed typos and focused on the meaning of words.…”
Section: Methods (mentioning)
confidence: 99%
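The "depress" example in the quoted passage can be reproduced with standard NLP tooling. A minimal sketch, assuming NLTK's TweetTokenizer and Porter stemmer; the cited study describes the step as lemmatization but does not name a library, so the tool choice here is an assumption:

```python
# Minimal preprocessing sketch (assumed tooling: NLTK). The Porter stemmer
# reproduces the "depression"/"depressed"/"depressing" -> "depress" example
# given in the quoted passage.
import re
from nltk.stem import PorterStemmer
from nltk.tokenize import TweetTokenizer

stemmer = PorterStemmer()
tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True)

def preprocess(tweet: str) -> list[str]:
    """Lower-case, tokenise, drop URLs and punctuation, and reduce words to stems."""
    tokens = tokenizer.tokenize(re.sub(r"https?://\S+", "", tweet))
    return [stemmer.stem(t) for t in tokens if t.isalpha()]

print(preprocess("Depression is depressing, and I feel depressed"))
# -> ['depress', 'is', 'depress', 'and', 'i', 'feel', 'depress']
```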
“…To extract these tweets, the team used keywords relating to seven of the most stigmatised conditions: Schizophrenia, Depression, Anxiety, Autism, Eating Disorders, obsessive compulsive disorder (OCD) (Robinson et al., 2019 ) and Addiction (Matthews et al., 2017 ). Search terms were based on those used previously in similar work (Robinson et al., 2019 ; Jilka et al., 2022 ; see supplementary material for keywords). We collected tweets during UK office hours (9 am–5 pm) spanning various pandemic stages: pre-UK lockdown (1st January–22nd March 2020), the first period of lockdown (23rd March–30th April 2020), and changing of lockdown rules (1st May–31st December 2020).…”
Section: Methods (mentioning)
confidence: 99%
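A rough sketch of the keyword-and-period filtering described above, assuming placeholder search terms (the actual terms are listed in the cited papers' supplementary material) and the UK office-hours window and lockdown dates given in the quote:

```python
# Illustrative only: KEYWORDS below are placeholders for the real search
# terms; the date ranges and 9 am-5 pm window are taken from the quoted text.
from datetime import date, datetime

KEYWORDS = ["schizophrenia", "depression", "anxiety", "autism",
            "eating disorder", "ocd", "addiction"]  # one placeholder per condition

STAGES = [  # (label, start, end) as described in the quoted passage
    ("pre-lockdown",   date(2020, 1, 1),  date(2020, 3, 22)),
    ("first lockdown", date(2020, 3, 23), date(2020, 4, 30)),
    ("changing rules", date(2020, 5, 1),  date(2020, 12, 31)),
]

def matches_keywords(text: str) -> bool:
    """True if the tweet text mentions any of the tracked conditions."""
    text = text.lower()
    return any(k in text for k in KEYWORDS)

def pandemic_stage(created: datetime):
    """Return the pandemic stage for a tweet collected in UK office hours, else None."""
    if not 9 <= created.hour < 17:          # 9 am - 5 pm window
        return None
    for label, start, end in STAGES:
        if start <= created.date() <= end:
            return label
    return None
```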
“…Sentiment analysis describes the affective or emotional tone presented in the text [ 33 ] based on psychological evidence of the emotional meaning of constituent words or phrases [ 34 , 35 ]. It has been used in several health-related cases, such as in detecting language associated with depressive symptoms [ 36 , 37 ], extracting opinions on health care–related topics [ 38 ], and identifying mental health stigma in social media data [ 39 ]. The score derived from this analysis identifies text with positive, neutral, or negative tones on a continuous scale, where scores closer to −1 are very negative, scores closer to +1 are very positive, and a score of 0 is neutral.…”
Section: Methods (mentioning)
confidence: 99%
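A brief sketch of a sentiment scorer on the scale described above, assuming NLTK's VADER analyser, whose compound score also runs from -1 (very negative) to +1 (very positive); the quoted passage does not name the specific tool used.

```python
# Assumed tool: VADER via NLTK (run nltk.download("vader_lexicon") once first).
# The compound score is continuous on [-1, +1], with 0 roughly neutral,
# matching the scale described in the quoted passage.
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()

for tweet in ["Recovery is possible and support really helps.",
              "People with schizophrenia are dangerous."]:
    score = sia.polarity_scores(tweet)["compound"]
    tone = "positive" if score > 0.05 else "negative" if score < -0.05 else "neutral"
    print(f"{score:+.3f}  {tone:8s}  {tweet}")
```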