Stigma harms people with mental health problems, making them less likely to seek help. We developed a proof-of-principle, service-user-supervised machine learning pipeline to reliably identify stigmatising tweets and gauge the prevalence of public schizophrenia stigma on Twitter. A service user group advised on the model evaluation metric (fewest false negatives) and on features for machine learning. We collected 13,313 public tweets about schizophrenia between January and May 2018. Two service user researchers manually labelled stigma in 746 English tweets; 80% were used to train eight models and 20% for testing. The two models with the fewest false negatives were compared in two service user validation exercises, and the best model was used to classify all extracted public English tweets. Tweets rated as stigmatising by service users were more negative in sentiment (t(744) = 12.02, p < 0.001, 95% CI 0.196–0.273). A linear Support Vector Machine performed best, with the fewest false negatives and the higher service user validation score. This model identified public stigma in 47% of English tweets (n = 5,676), which were more negative in sentiment (t(12,143) = 64.38, p < 0.001, 95% CI 0.29–0.31). Machine learning, with service user involvement, can identify stigmatising tweets at scale. Given the prevalence of stigma, there is an urgent need for education and online campaigns to reduce it, and machine learning can provide a real-time metric of their success.
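A minimal sketch of the kind of pipeline the abstract describes: train candidate text classifiers on an 80/20 split of the labelled tweets and select the one with the fewest false negatives, the metric the service user group chose. The file name, column names, and the two candidate models shown (of the eight compared) are illustrative assumptions, not details from the paper.

```python
# Sketch only: assumes a CSV of manually labelled tweets with columns
# "text" and "stigma" (1 = stigmatising, 0 = not). File and column
# names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.pipeline import make_pipeline

labelled = pd.read_csv("labelled_tweets.csv")  # hypothetical file
X_train, X_test, y_train, y_test = train_test_split(
    labelled["text"], labelled["stigma"], test_size=0.2, random_state=42
)

# Two illustrative candidates; the study compared eight models.
candidates = {
    "linear_svm": make_pipeline(TfidfVectorizer(), LinearSVC()),
    "logreg": make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)),
}

# Service users prioritised fewest false negatives: a stigmatising
# tweet the model misses is treated as the costliest error.
false_negatives = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
    false_negatives[name] = fn

best = min(false_negatives, key=false_negatives.get)
print(f"Model with fewest false negatives: {best}")
```

The selected model would then be applied to the full corpus of extracted English tweets, as the study did with its linear SVM.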
Background: Mental health services are turning to technology to ease the resource burden, but privacy policies are hard to understand, potentially compromising consent for people with mental health problems. The FDA recommends a reading grade of 8. Objective: To investigate and improve the accessibility and acceptability of the privacy policies of mental health (depression) apps. Methods: A mixed-methods study using quantitative and qualitative data to improve the accessibility of app privacy policies. Service users completed assessments and focus groups to suggest ways of improving privacy policy accessibility, including identifying and rewording jargon. This was supplemented by comparing mental health (depression) apps with social media, music, and finance apps using readability analyses, and by examining whether GDPR affected accessibility. Results: Service users provided a detailed framework for increasing accessibility that emphasised including the critical information needed for consent. Quantitatively, most app privacy policies were too long and too complicated to ensure informed consent (mental health apps: mean reading grade = 13.1, SD = 2.44). Their reading grades did not differ from those of the other services. Only three mental health apps had a reading grade of 8 or less, and 99% contained service-user-identified jargon. Mental health app privacy policies produced for GDPR were no more readable and were longer. Conclusions: Apps aimed specifically at people with mental health difficulties are not accessible, and even those meeting the FDA's recommended reading grade contained jargon. Developers and designers can increase accessibility by following a few rules and should check, before launch, that the privacy policy can be understood.
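A minimal sketch of the readability check the abstract describes, using the textstat package to compute a Flesch-Kincaid reading grade (the specific readability measure and package are assumptions; the study does not name its tooling here). The file name and jargon list are illustrative placeholders.

```python
# Sketch only: textstat is one common implementation of standard
# readability formulas; any Flesch-Kincaid implementation would do.
import textstat

policy_text = open("privacy_policy.txt").read()  # hypothetical file

grade = textstat.flesch_kincaid_grade(policy_text)
words = textstat.lexicon_count(policy_text)

# The FDA-recommended ceiling for patient-facing material is grade 8.
print(f"Reading grade: {grade:.1f} ({'OK' if grade <= 8 else 'too complex'})")
print(f"Length: {words} words")

# Flag jargon terms of the kind service users identified
# (illustrative list, not the study's actual terms).
jargon = {"indemnify", "third party", "pseudonymised", "data controller"}
found = [term for term in jargon if term in policy_text.lower()]
print(f"Jargon present: {found or 'none'}")
```

Running such a check before launch is the kind of step the conclusions recommend to developers and designers.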