Results of radiology imaging studies are typically not comprehensible to patients. With recent advances in artificial intelligence (AI), the technology is expected to aid patients' understanding of radiology imaging data. The aim of this study was to understand patients' perceptions and acceptance of using AI to interpret their radiology reports. We conducted semi-structured interviews with 13 participants to elicit reflections on the use of AI in radiology report interpretation, and analyzed the interview data using a thematic analysis approach. Participants had a generally positive attitude toward using AI-based systems to comprehend their radiology reports. AI was perceived as particularly useful for seeking actionable information, confirming the doctor's opinion, and preparing for the consultation. However, we also found various concerns about the use of AI in this context, such as cybersecurity, accuracy, and lack of empathy. Our results highlight the necessity of providing AI explanations to promote people's trust and acceptance of AI. Designers of patient-centered AI systems should employ user-centered design approaches to address patients' concerns; such systems should also be designed to promote trust and to deliver concerning health results in an empathetic manner, optimizing the user experience.
By mapping short messages into a larger semantic context, we can compute the distances between them and then classify them. We test this conjecture on Twitter messages: messages are mapped onto their most similar Wikipedia pages, and the distances between those pages are used as a proxy for the distances between the messages themselves. This technique yields more accurate classification of a set of Twitter messages than alternative techniques based on string edit distance and latent semantic analysis.
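The pipeline described above is concrete enough to sketch in code. The following is a minimal Python illustration, not the authors' implementation: the candidate Wikipedia pages are inlined as toy strings, scikit-learn's TF-IDF cosine similarity stands in for whatever message-to-page matching the original system uses, and message_distance is a hypothetical helper showing the proxy-distance step.

# Minimal sketch of the Wikipedia-mapping idea; toy data, not the paper's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical candidate pages (title -> page text); a real system would
# draw these from a full Wikipedia index.
pages = {
    "Basketball": "basketball game played points hoop court team season",
    "Volcano": "volcano eruption lava ash magma geology crater",
    "Election": "election vote ballot candidate campaign government policy",
}
titles = list(pages)

messages = [
    "what a game tonight, they win again in the final seconds",
    "ash cloud from the eruption grounded all flights",
]

# Fit one TF-IDF space over pages and messages so the vectors are comparable.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(pages.values()) + messages)
page_vecs = matrix[: len(pages)]
msg_vecs = matrix[len(pages):]

# Step 1: map each message onto its most similar Wikipedia page.
sims = cosine_similarity(msg_vecs, page_vecs)
mapped = [titles[row.argmax()] for row in sims]

# Step 2: use the distance between the mapped pages as a proxy for the
# distance between the messages themselves.
def message_distance(i, j):
    pi, pj = titles.index(mapped[i]), titles.index(mapped[j])
    return 1.0 - cosine_similarity(page_vecs[pi], page_vecs[pj])[0, 0]

for msg, title in zip(messages, mapped):
    print(f"{msg!r} -> {title}")
print("proxy distance:", message_distance(0, 1))

In a full system the mapping step would search an indexed copy of Wikipedia, and the page-to-page metric could be link-graph distance rather than text similarity; the essential idea is only that two short, noisy messages are compared through the pages they map to rather than through their own text.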
With recent advances in artificial intelligence (AI), patient-facing applications have started embodying this technology to deliver timely healthcare information and services to patients. However, little is known about lay individuals' perceptions and acceptance of AI-driven, patient-facing health systems. In this study, we conducted a survey with 203 participants to investigate their perceptions of using AI to consult information related to their diagnostic results and what factors influence those perceptions. Our results showed that although awareness and experience of patient-facing AI systems were low among our participants, people had a generally positive attitude toward such systems. A majority of participants indicated a high level of comfort with, and willingness to use, health AI systems, and agreed that AI could help them comprehend diagnostic results. Several intrinsic factors, such as educational background and technology literacy, played an important role in people's perceptions of using AI to comprehend diagnostic results. In particular, people with high technology literacy, health literacy, and education levels had more experience using AI and tended to trust AI outputs. We conclude by discussing the implications of this work, with an emphasis on enhancing the trustworthiness of AI and bridging the digital divide.