Background
Dementia misconceptions on social media are common and negatively affect people with the condition, their carers, and those who know them. This study codeveloped a thematic framework with carers to understand the forms these misconceptions take on Twitter.

Objective
The aim of this study was to identify and analyze types of dementia conversations on Twitter using participatory methods.

Methods
A total of 3 focus groups with dementia carers were held to develop a framework of dementia misconceptions based on their experiences. Dementia-related tweets were collected from Twitter's official application programming interface using neutral and negative search terms defined by the literature and by carers (N=48,211). A sample of these tweets with equal numbers of neutral and negative search terms (n=1497) was selected and validated through individual ratings by carers. We then used the framework to analyze in detail a sample of carer-rated negative tweets (n=863).

Results
A total of 25.94% (12,507/48,211) of the tweet corpus contained negative search terms about dementia. The carers' framework had 3 negative and 3 neutral categories. Our thematic analysis of carer-rated negative tweets identified 9 themes, including weaponizing language used to insult politicians (469/863, 54.3%), dehumanizing or outdated words or statements about members of the public (143/863, 16.6%), unfounded claims about cures or causes of dementia (11/863, 1.3%), and armchair diagnoses of dementia (21/863, 2.4%).

Conclusions
This is the first study to use participatory methods to develop a framework that identifies dementia misconceptions on Twitter. We show that misconceptions and stigmatizing language are not rare and that they often manifest through minimizing and underestimating language. Web-based campaigns aiming to reduce discrimination and stigma about dementia could target users of negative vocabulary and reduce the misconceptions being propagated, thus improving general awareness.
Background
Patient and public involvement can improve study outcomes, but few data exist on why this might be. We investigated the impact of the Feasibility and Support to Timely Recruitment for Research (FAST-R) service, made up of trained patients and carers who review research documents at the beginning of the research pipeline.

Aims
To investigate the impact of the FAST-R service, and to provide researchers with guidelines for improving study documents.

Method
A mixed-methods design assessing changes and suggestions in documents submitted to the FAST-R service from 2011 to 2020. Quantitative measures were readability, word count, and jargon words before and after review; the effects over time; and whether suggested changes were implemented. We also asked eight reviewers to blindly select a pre- or post-review participant information sheet as their preferred version. Reviewers' comments were analysed qualitatively via thematic analysis.

Results
After review, documents were longer and contained less jargon, but readability did not improve. Jargon and the number of suggested changes increased over time. Participant information sheets received the most suggested changes. Reviewers wanted clarity and better presentation, and felt that documents lacked key information such as remuneration, risks involved and data management. Six out of eight reviewers preferred the post-review participant information sheet. FAST-R reviewers provided jargon words and phrases with alternatives for researchers to use.

Conclusions
Longer documents are acceptable if they are clear, with jargon explained or substituted. The highlighted barriers to true informed consent are not decreasing, although this study offers suggestions for improving research document accessibility.
Background
As the number of mental health apps has grown, increasing effort has been focused on establishing quality tailored reviews. These reviews prioritize the views of clinicians and academics rather than those of the people who use the apps, particularly people with lived experience of mental health problems. Given that the COVID-19 pandemic has increased reliance on web-based and mobile mental health support, understanding the views of those with mental health conditions is increasingly important.

Objective
This study aimed to understand the opinions of people with mental health problems on mental health apps and how they differ from established ratings by professionals.

Methods
A mixed methods study was conducted using a web-based survey administered between December 2020 and April 2021, assessing 11 mental health apps. We recruited individuals who had experienced mental health problems to download and use 3 apps for 3 days and complete a survey. The survey consisted of the One Mind PsyberGuide Consumer Review Questionnaire and 2 items from the Mobile App Rating Scale (star and recommendation ratings from 1 to 5). The consumer review questionnaire contained a series of open-ended questions, which were thematically analyzed and, using a predefined protocol, converted into binary (positive or negative) ratings and compared with app ratings by professionals and star ratings from app stores.

Results
We found low agreement between the participants' and professionals' ratings: more than half of the app ratings showed disagreement between participants and professionals (198/372, 53.2%). Compared with participants, professionals gave the apps higher star ratings (3.58 vs 4.56) and were more likely to recommend the apps to others (3.44 vs 4.39). Participants' star ratings were weakly positively correlated with app store ratings (r=0.32, P=.01). Thematic analysis found 11 themes, including issues of user experience, ease of use and interactivity, privacy concerns, customization, and integration with daily life. Participants particularly valued certain aspects of mental health apps that appear to be overlooked by professional reviewers, such as the ability to track and measure mental health and the provision of general mental health education. The cost of apps was among the most important factors for participants; although professionals already consider cost, this information is not always easily accessible.

Conclusions
Because reviews on app stores and by professionals differ from those by people with lived experience of mental health problems, these alone are not sufficient to provide people with mental health problems with the information they need when choosing a mental health app. App rating measures must include the perspectives of mental health service users to ensure that ratings represent their priorities. Additional work should be done to incorporate the features most important to mental health service users into mental health apps.
Background: Mental health stigma on social media is well studied, but not from the perspective of mental health service users. The coronavirus disease 2019 (COVID-19) pandemic increased mental health discussions and may have affected stigma.

Objectives: (1) To understand how service users perceive and define mental health stigma on social media; (2) to understand how COVID-19 shaped mental health conversations and social media use.

Methods: We collected 2,700 tweets related to seven mental health conditions: schizophrenia, depression, anxiety, autism, eating disorders, OCD, and addiction. Twenty-seven service users rated the tweets as stigmatising or neutral and then took part in focus group discussions. Focus group transcripts were thematically analysed.

Results: Participants rated 1,101 tweets (40.8%) as stigmatising. Tweets related to schizophrenia were most frequently classed as stigmatising (411/534, 77%), while tweets related to depression or anxiety were least often rated as stigmatising (139/634, 21.9%). Whether a tweet was judged stigmatising depended on its perceived intention and context, but some words (e.g. "psycho") felt stigmatising irrespective of context.

Discussion: The anonymity of social media seemingly increased stigma, whereas COVID-19 lockdowns improved mental health literacy. This is the first study to qualitatively investigate service users' views of stigma towards various mental health conditions on Twitter, and we show that stigma is common, particularly towards schizophrenia. Service user involvement is vital when designing solutions to stigma.