Results of radiology imaging studies are not typically comprehensible to patients. With the advances in artificial intelligence (AI) technology in recent years, AI is expected to aid patients' understanding of radiology imaging data. The aim of this study was to understand patients' perceptions and acceptance of using AI technology to interpret their radiology reports. We conducted semi-structured interviews with 13 participants to elicit reflections pertaining to the use of AI technology in radiology report interpretation. A thematic analysis approach was employed to analyze the interview data. Participants had a generally positive attitude toward using AI-based systems to comprehend their radiology reports. AI was perceived to be particularly useful for seeking actionable information, confirming the doctor's opinions, and preparing for the consultation. However, we also found various concerns related to the use of AI in this context, such as cyber-security, accuracy, and lack of empathy. Our results highlight the necessity of providing AI explanations to promote people's trust and acceptance of AI. Designers of patient-centered AI systems should employ user-centered design approaches to address patients' concerns. Such systems should also be designed to promote trust and deliver concerning health results in an empathetic manner to optimize the user experience.
Notes: Questions marked with a hash tag (#) are displayed only when a certain response is provided. Questions marked with an asterisk (*) allow participants to select more than one option.
Screening Questions
Did you take any clinical lab test (e.g., blood test, urine test, MRI, CT scan) over the past six months?
• Yes
• No
With the recent advances in Artificial Intelligence (AI) technology, patient-facing applications have started embodying this novel technology to deliver timely healthcare information and services to patients. However, little is known about lay individuals' perceptions and acceptance of AI-driven, patient-facing health systems. In this study, we conducted a survey with 203 participants to investigate their perceptions of using AI to consult information related to their diagnostic results and what factors influence those perceptions. Our results showed that although awareness and experience of patient-facing AI systems were low amongst our participants, people had a generally positive attitude towards such systems. A majority of participants indicated a high level of comfort with and willingness to use health AI systems, and agreed that AI could help them comprehend diagnostic results. Several intrinsic factors, such as education background and technology literacy, play an important role in people's perceptions of using AI to comprehend diagnostic results. In particular, people with high technology literacy, health literacy, and education levels had more experience using AI and tended to trust AI outputs. We conclude this paper by discussing the implications of this work, with an emphasis on enhancing the trustworthiness of AI and bridging the digital divide.
BACKGROUND Patients are increasingly able to access their laboratory test results via patient portals. However, merely providing access does not guarantee comprehension. Patients could experience confusion or even anxiety when reviewing their test results. OBJECTIVE Our objective was to examine the challenges and needs that patients have when comprehending laboratory test results. METHODS We conducted an online survey with 203 participants and a set of semi-structured interviews with 13 participants. We assessed 1) patients' perceived challenges and needs (both informational and technological) when they attempt to comprehend test results; 2) what factors are associated with patients' perceptions; and 3) strategies for improving the design of patient portals to communicate lab test results more effectively. Descriptive statistics and thematic analysis were used to analyze the survey and interview data, respectively. RESULTS We found that sociodemographic factors and lab result normality have significant impacts on people's perceptions of using portals to view and interpret lab results. Patients need a variety of information to comprehend lab results, which falls into two categories: generic information (e.g., medical terminology, reference ranges, and the diagnostic abilities of a specific test) and personalized or contextual information (e.g., the meaning of a lab value in relation to their health, what to do next, and what to ask their doctor at a clinic visit). The desired enhancements of patient portals include providing easy-to-understand data visualizations and terminology, explaining results in the context of the patient's health, supplementing educational services, increasing usability and accessibility, and incorporating artificial intelligence (AI)-based technology. CONCLUSIONS Patients face significant challenges in interpreting the meaning of lab test results.
Designers and developers of patient portals should consider employing user-centered approaches to improve portal design and present information in a more meaningful way.