Background: Dementia misconceptions on social media are common and have negative effects on people with the condition, their carers, and those who know them. This study codeveloped a thematic framework with carers to understand the forms these misconceptions take on Twitter.

Objective: The aim of this study was to identify and analyze types of dementia conversations on Twitter using participatory methods.

Methods: A total of 3 focus groups with dementia carers were held to develop a framework of dementia misconceptions based on their experiences. Dementia-related tweets were collected from Twitter's official application programming interface using neutral and negative search terms defined by the literature and by carers (N=48,211). A sample of these tweets, containing equal numbers of neutral and negative search terms (n=1497), was then rated individually by carers to validate the framework. We then used the framework to analyze in detail a sample of carer-rated negative tweets (n=863).

Results: A total of 25.94% (12,507/48,211) of our tweet corpus contained negative search terms about dementia. The carers' framework had 3 negative and 3 neutral categories. Our thematic analysis of carer-rated negative tweets found 9 themes, including the use of weaponizing language to insult politicians (469/863, 54.3%), dehumanizing or outdated words or statements about members of the public (143/863, 16.6%), unfounded claims about the cures or causes of dementia (11/863, 1.3%), and armchair diagnoses of dementia (21/863, 2.4%).

Conclusions: This is the first study to use participatory methods to develop a framework that identifies dementia misconceptions on Twitter. We show that misconceptions and stigmatizing language are not rare and that they manifest through minimizing and underestimating language. Web-based campaigns aiming to reduce discrimination and stigma about dementia could target those who use negative vocabulary, reducing the misconceptions being propagated and improving general awareness.
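As a rough illustration of the collection step described above, the sketch below pulls dementia-related tweets matching neutral and negative search terms. It uses tweepy's v2 client, which differs from the interface available at the time of the study; the search terms, the bearer-token placeholder, and the `collect` helper are illustrative assumptions, not the study's actual term lists or code.

```python
# A minimal sketch, assuming tweepy and Twitter API v2 access. The term lists
# below are placeholders, not the carer- and literature-defined search terms.
import tweepy

NEUTRAL_TERMS = ["dementia", "alzheimers"]   # placeholder neutral terms
NEGATIVE_TERMS = ["demented", "senile"]      # placeholder negative terms

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

def collect(terms, max_results=100):
    """Fetch recent English tweets matching any of the given terms."""
    query = "(" + " OR ".join(terms) + ") lang:en -is:retweet"
    response = client.search_recent_tweets(query=query, max_results=max_results)
    return [tweet.text for tweet in (response.data or [])]

corpus = collect(NEUTRAL_TERMS) + collect(NEGATIVE_TERMS)
print(f"Collected {len(corpus)} tweets")
```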
Stigma has negative effects on people with mental health problems, making them less likely to seek help. We developed a proof-of-principle, service user-supervised machine learning pipeline to identify stigmatising tweets reliably and to understand the prevalence of public schizophrenia stigma on Twitter. A service user group advised on the machine learning model evaluation metric (fewest false negatives) and on the features for machine learning. We collected 13,313 public tweets on schizophrenia between January and May 2018. Two service user researchers manually identified stigma in 746 English tweets; 80% were used to train eight models and 20% for testing. The two models with the fewest false negatives were compared in two service user validation exercises, and the best model was used to classify all extracted public English tweets. Tweets classed as stigmatising by service users were more negative in sentiment (t(744) = 12.02, p < 0.001 [95% CI: 0.196–0.273]). Our linear support vector machine was the best-performing model, with the fewest false negatives and higher service user validation. This model identified public stigma in 47% of English tweets (n = 5,676), which were more negative in sentiment (t(12,143) = 64.38, p < 0.001 [95% CI: 0.29–0.31]). Machine learning can identify stigmatising tweets at large scale, with service user involvement. Given the prevalence of stigma, there is an urgent need for education and online campaigns to reduce it. Machine learning can provide a real-time metric of their success.
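A minimal sketch of the kind of pipeline this abstract describes: text features feeding a linear SVM, an 80/20 train/test split, and false negatives as the model selection metric. The TF-IDF features, scikit-learn tooling, and toy data are assumptions on my part; the study's own features were chosen with service user input.

```python
# A minimal sketch, assuming scikit-learn and TF-IDF features: train a linear
# SVM on rated tweets with an 80/20 split, then count false negatives, the
# service user-chosen selection metric. The toy data is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder corpus standing in for the 746 manually rated tweets.
tweets = ["tweet text a", "tweet text b", "tweet text c", "tweet text d",
          "tweet text e", "tweet text f", "tweet text g", "tweet text h"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = stigmatising, 0 = not stigmatising

X_train, X_test, y_train, y_test = train_test_split(
    tweets, labels, test_size=0.2, stratify=labels, random_state=42)

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(X_train, y_train)

# False negatives: stigmatising tweets the model missed.
tn, fp, fn, tp = confusion_matrix(
    y_test, model.predict(X_test), labels=[0, 1]).ravel()
print(f"False negatives on held-out tweets: {fn}")
```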
Background: Mental health services are turning to technology to ease the resource burden, but privacy policies are hard to understand, potentially compromising consent for people with mental health problems. The FDA recommends a reading grade of 8.

Objective: To investigate and improve the accessibility and acceptability of mental health (depression) app privacy policies.

Methods: A mixed methods study using quantitative and qualitative data to improve the accessibility of app privacy policies. Service users completed assessments and focus groups to provide information on ways to improve privacy policy accessibility, including identifying and rewording jargon. This was supplemented by comparing mental health (depression) apps with social media, music, and finance apps using readability analyses, and by examining whether GDPR affected accessibility.

Results: Service users provided a detailed framework for increasing accessibility that emphasised having the critical information needed for consent. Quantitatively, most app privacy policies were too long and too complicated to ensure informed consent (mental health apps: mean reading grade = 13.1, SD = 2.44). Their reading grades were no different from those of other services. Only 3 mental health apps had a reading grade of 8 or less, and 99% contained service user-identified jargon. Mental health app privacy policies produced for GDPR were not more readable and were longer.

Conclusions: Apps specifically aimed at people with mental health difficulties are not accessible, and even those that fulfilled the FDA's recommended reading grade contained jargon words. Developers and designers can increase accessibility by following a few rules and should check, before launching, whether the privacy policy can be understood.
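The readability check could look something like the sketch below, which scores a policy against the FDA-recommended grade 8 ceiling. The textstat package, the Flesch-Kincaid formula, and the file path are assumptions; the abstract does not specify the study's readability tooling.

```python
# A minimal sketch, assuming the textstat package and a Flesch-Kincaid grade;
# the study's actual readability measure and tooling are not specified here.
import textstat

with open("privacy_policy.txt", encoding="utf-8") as f:  # hypothetical file
    policy = f.read()

grade = textstat.flesch_kincaid_grade(policy)
words = len(policy.split())

print(f"Reading grade: {grade:.1f} ({words} words)")
if grade > 8:
    print("Above the FDA-recommended grade 8: likely too complex for informed consent.")
```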
Background: Mental health stigma on social media is well studied, but not from the perspective of mental health service users. Coronavirus disease 2019 (COVID-19) increased mental health discussions and may have affected stigma.

Objectives: (1) To understand how service users perceive and define mental health stigma on social media; (2) to understand how COVID-19 shaped mental health conversations and social media use.

Methods: We collected 2,700 tweets related to seven mental health conditions: schizophrenia, depression, anxiety, autism, eating disorders, OCD, and addiction. Twenty-seven service users rated them as stigmatising or neutral, followed by focus group discussions. Focus group transcripts were thematically analysed.

Results: Participants rated 1,101 tweets (40.8%) as stigmatising. Tweets related to schizophrenia were most frequently classed as stigmatising (411/534, 77%), and tweets related to depression or anxiety were least frequently classed as stigmatising (139/634, 21.9%). Whether a tweet was stigmatising depended on its perceived intention and context, but some words (e.g. “psycho”) felt stigmatising irrespective of context.

Discussion: The anonymity of social media seemingly increased stigma, but COVID-19 lockdowns improved mental health literacy. This is the first study to qualitatively investigate service users' views of stigma towards various mental health conditions on Twitter, and we show that stigma is common, particularly towards schizophrenia. Service user involvement is vital when designing solutions to stigma.
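The per-condition stigma rates reported above (e.g. 411/534 for schizophrenia) amount to a simple aggregation of service user ratings, sketched below with pandas. The column names, data layout, and toy rows are hypothetical, not the study's dataset.

```python
# An illustrative aggregation, assuming one row per rated tweet with a binary
# service-user rating. The data and column names are hypothetical placeholders.
import pandas as pd

ratings = pd.DataFrame({
    "condition": ["schizophrenia", "schizophrenia", "depression", "anxiety"],
    "stigmatising": [1, 1, 0, 1],  # 1 = rated stigmatising, 0 = neutral
})

summary = (ratings.groupby("condition")["stigmatising"]
           .agg(total="count", n_stigmatising="sum"))
summary["percent"] = 100 * summary["n_stigmatising"] / summary["total"]
print(summary)
```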