2023
DOI: 10.1136/bmjopen-2022-066322
Public perceptions on the application of artificial intelligence in healthcare: a qualitative meta-synthesis

Chenxi Wu,
Huiqiong Xu,
Dingxi Bai
et al.

Abstract: Objectives Medical artificial intelligence (AI) has been widely applied in the clinical field due to its convenience and innovation. However, several policy and regulatory issues, such as credibility, sharing of responsibility and ethics, have raised concerns about the use of AI. It is therefore necessary to understand the general public's views on medical AI. Here, a meta-synthesis was conducted to analyse and summarise the public's understanding of the application of AI in the healthcare field, to provide recomme…

Cited by 24 publications (14 citation statements)
References 60 publications
“…Despite agreeing that artificial intelligence would play an integral role in future healthcare, two-thirds of the respondents were of the view that the adoption of artificial intelligence can raise new ethical challenges. This is similar to the report from a study in China (Wu et al, 2023). Patients' confidentiality is essential for high-quality healthcare delivery.…”
Section: Discussion (supporting)
confidence: 85%
“…Participants in our study cited concerns consistent with previous work related to AI accuracy, risk of harm (e.g., wrong diagnosis, inappropriate treatment), 42,51,52 decreased human communication/connection, 34,42,46,51 and issues pertaining to confidentiality. 34,42,46,[52][53][54][55] Issues related to privacy were also the most commonly mentioned concern in the qualitative feedback.…”
Section: Concerns Regarding AI for Mental Health (RQ2) (supporting)
confidence: 67%
“…In line with the prototype perception literature ( Gibbons and Gerrard, 1995 ; Lazuras et al, 2019 ) and existing public perception research on medical AI ( Esmaeilzadeh, 2020 ; Wu et al, 2023 ), the participants were asked to self-report their perceptions of the typical risk characteristics of health chatbots. An item example is as follows: “Health chatbots are dangerous” (Cronbach’s α = 0.841).…”
Section: Methods (mentioning)
confidence: 99%