BACKGROUND
The application of artificial intelligence (AI) to health and healthcare is rapidly increasing. Several studies have assessed the attitudes of health professionals, but far fewer have explored the perspectives of patients or the general public. Studies investigating patient perspectives have focused on somatic domains, including radiology, perinatal health, and general applications. Patient feedback has been elicited in the development of specific mental health solutions, but broader perspectives on AI for mental health remain under-explored.
OBJECTIVE
To understand public perceptions regarding potential benefits of AI, concerns, comfort with AI accomplishing various tasks, and values related to AI, all pertaining to mental health.
METHODS
We conducted a one-time, cross-sectional survey of a nationally representative sample of 500 United States-based adults. Participants provided structured responses on their perceived benefits of, concerns about, comfort with, and values regarding AI for mental health. They could also elaborate on their concerns and values in free-text responses.
RESULTS
A plurality of participants (49.3%) believed AI may be beneficial for mental healthcare, but this perspective differed by sociodemographic variables (p<0.05). Specifically, Black participants (OR=1.76) and those with lower health literacy (OR=2.16) perceived AI to be more beneficial, while female participants (OR=0.68) perceived AI to be less beneficial. Participants endorsed concerns about the accuracy of AI for mental health, possible unintended consequences such as misdiagnosis, the confidentiality of their information, and loss of connection with their health professional. Over 80% of participants also valued being able to understand the individual factors driving their risk, confidentiality, and autonomy regarding the use of AI for their mental health. When asked who was responsible for the misdiagnosis of mental health conditions using AI, 81.6% of participants held the health professional responsible. Qualitative results revealed similar concerns about the accuracy of AI and how its use may affect the confidentiality of participants' information.
CONCLUSIONS
Future work on the use of AI for mental health should investigate strategies for conveying the level of AI's accuracy, the factors that drive patients' risk, and how data are kept confidential, so that patients can work with their health professionals to determine when AI may be beneficial. In a mental health context, it will also be important to ensure that the patient-health professional relationship is preserved when AI is used.
CLINICALTRIAL
Not applicable