BACKGROUND
Mental health apps raise critical questions about privacy. Personal mental health information, if collected and distributed inappropriately through these apps, can threaten employment, social and political opportunities, and personal relationships, and can contribute to social isolation and other detrimental effects. Given this context, examination of privacy in mental health apps is warranted.
OBJECTIVE
This study traces the trajectory of privacy language in scholarship about mental health apps since their arrival in the marketplace in 2008 and offers a critical meta-analysis of this body of empirical research.
METHODS
Articles for analysis (N=136) were drawn from a comprehensive search of over 340 academic databases for peer-reviewed journal articles published from 2008 to June 2019. Qualitative thematic analysis was used to chart the development of privacy discourse on mental health apps.
RESULTS
The concept of privacy is under-theorized in mental health app research. The study characterizes the development of privacy language in three phases: Phase One: Discourse of Technological Possibility; Phase Two: Discourse of Privacy Challenges and Threats; and Phase Three: Discourse of Advocacy. Results show a growing acknowledgement of privacy concerns, culminating in strategic approaches to protecting privacy and security, such as risk mitigation, privacy certification, and advocacy for mental health app regulation.
CONCLUSIONS
Findings show that mental health apps raise unique and fundamental privacy concerns. This meta-analysis of the scholarship demonstrates that protecting digital privacy in mental health apps requires substantive security measures to safeguard users from harm.
CLINICALTRIAL
n/a