Mobile mental health applications are seen as a promising way to meet the growing need for mental health care. Although there are more than ten thousand mental health apps available on marketplaces such as Google Play and the Apple App Store, many of them are not evidence-based and have undergone minimal evaluation or regulation. The real-life experiences and concerns of app users remain largely unknown. To address this knowledge gap, we analyzed 2159 user reviews from 117 Android apps and 2764 user reviews from 76 iOS apps. Our findings include user critiques of inconsistent moderation standards and a lack of transparency. App-embedded social features and chatbots were criticized for providing little support during crises. We provide research and design implications for future mental health app developers, discuss the need for a comprehensive and centralized app development guideline, and highlight opportunities for incorporating existing AI technology into mental health chatbots.
Background
Chatbots are an emerging technology with the potential to enable effective and practical evidence-based therapies in mental health care apps. Because this technology is still relatively new, little is known about recently developed apps, their characteristics, and their effectiveness.
Objective
In this study, we aimed to provide an overview of popular, commercially available mental health chatbots and how they are perceived by users.
Methods
We conducted an exploratory observation of 10 apps with a built-in chatbot feature that offer support and treatment for a variety of mental health concerns, and we qualitatively analyzed 3621 consumer reviews from the Google Play Store and 2624 consumer reviews from the Apple App Store.
Results
We found that although users received chatbots' personalized, humanlike interactions positively, improper responses and assumptions about users' personalities led to a loss of interest. Because chatbots are always accessible and convenient, users can become overly attached to them and prefer them over interacting with friends and family. Furthermore, a chatbot's 24/7 availability means it can offer crisis care whenever the user needs it, yet even recently developed chatbots lack the ability to properly identify a crisis. The chatbots considered in this study fostered a judgment-free environment and helped users feel more comfortable sharing sensitive information.
Conclusions
Our findings suggest that chatbots have great potential to offer social and psychological support in situations where real-world human interaction, such as connecting with friends or family members or seeking professional support, is not preferred or not possible. However, these chatbots must establish restrictions and limitations appropriate to the level of service they offer. Overreliance on the technology can pose risks, such as isolation and insufficient assistance during times of crisis. Based on these findings, we outline recommendations for customization and balanced persuasion to inform the design of effective chatbots for mental health support.
How do individuals in twelve-step fellowships such as Alcoholics Anonymous (AA) and Narcotics Anonymous (NA) interpret and enact "anonymity"? In this paper, we answer this question through a mixed-methods investigation. Through a secondary analysis of interview data from 26 participants and an online questionnaire (N=285), we found three major interpretations of anonymity among AA and NA members: "unidentifiability," "social contract," and "program over individual." While unidentifiability has been the focus of computing research, the other interpretations provide a significant and novel lens on anonymity. To understand how and when the unidentifiability interpretation was most likely to be enacted, we conducted a quantitative analysis of activity traces in a large online recovery community. We observed that members were less likely to enact "unidentifiability" if they were more connected to the particular community and had more time in recovery. We provide implications for future research on context-specific anonymity and for design in online recovery spaces and similar sensitive contexts.