Background
Explainability is one of the most heavily debated topics in the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, their lack of explainability continues to draw criticism. Yet explainability is not a purely technological issue; it also raises a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice.

Methods
Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the “Principles of Biomedical Ethics” by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI.

Results
Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. From the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health.

Conclusions
To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.
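To make the technological perspective concrete: one common route to achieving explainability is to use an intrinsically interpretable model, where each feature's contribution to an individual prediction can be read off directly. The snippet below is a minimal, hypothetical sketch of this idea (not drawn from the paper); the dataset and feature names are invented for illustration.

```python
# Hypothetical sketch: explainability via an intrinsically interpretable model.
# The data and feature names below are illustrative stand-ins, not real clinical data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for clinical tabular data.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["age", "blood_pressure", "biomarker_a", "biomarker_b"]  # assumed names

model = LogisticRegression().fit(X, y)

# Explain a single prediction: in logistic regression the log-odds are a linear
# sum, so coefficient * feature value gives each feature's contribution.
patient = X[0]
contributions = model.coef_[0] * patient
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.3f} (log-odds contribution)")
print(f"predicted risk: {model.predict_proba([patient])[0, 1]:.2f}")
```

Such per-feature attributions are only one narrow sense of "explainability"; for the opaque deep models the paper is concerned with, post-hoc approximation methods would be needed instead, with their own limitations.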
Effy Vayena and colleagues argue that machine learning in medicine must offer data protection, algorithmic transparency, and accountability to earn the trust of patients and clinicians.
Background
Information and communication technologies have long been prominent components of health systems. Rapid advances in digital technologies and data science over the last few years are predicted to have a vast impact on health care services, marking a paradigm shift into what is now commonly referred to as digital health. Forecast to curb rising health costs and to improve health system efficiency and safety, digital health depends for its success on the trust of professional end users, administrators, and patients. Yet what counts as the building blocks of trust in digital health has so far remained underexplored.

Objective
The objective of this study was to analyze what relevant stakeholders consider enablers and impediments of trust in digital health.

Methods
We performed a scoping review to map out trust in digital health. To identify relevant digital health studies, we searched 5 electronic databases. Using keywords and Medical Subject Headings, we targeted all relevant studies and set no boundaries for publication year to allow a broad range of studies to be identified. The studies were screened by 2 reviewers, after which a predefined data extraction strategy was employed and relevant themes documented.

Results
Overall, 278 qualitative, quantitative, mixed-methods, and intervention studies in English, published between 1998 and 2017 and conducted in 40 countries, were included in this review. Patients and health care professionals were the two most prominent stakeholders of trust in digital health; a third group, health administrators, was substantially less prominent. Our analysis identified cross-cutting personal, institutional, and technological elements of trust that broadly cluster into 16 enablers (altruism, fair data access, ease of use, self-efficacy, sociodemographic factors, recommendation by other users, usefulness, customizable design features, interoperability, privacy, initial face-to-face contact, guidelines for standardized use, stakeholder engagement, improved communication, decreased workloads, and service provider reputation) and 10 impediments (excessive costs, limited accessibility, sociodemographic factors, fear of data exploitation, insufficient training, defective technology, poor information quality, inadequate publicity, time demands, and service provider reputation) to trust in digital health.

Conclusions
Trust in digital health technologies and services depends on the interplay of a complex set of enablers and impediments. This study contributes to ongoing efforts to understand what determines trust in digital health according to different stakeholders, and it offers valuable points of reference for the implementation of innovative digital health services. Building on its insights, actionable metrics can be developed to assess the trustworthiness of digital technologies in health care.
We explored the characteristics and motivations of people who, having obtained their genetic or genomic data from direct-to-consumer genetic testing (DTC-GT) companies, voluntarily decide to share them on the publicly accessible web platform openSNP. The study is the first attempt to describe open data sharing activities undertaken by individuals without institutional oversight. In the paper we provide a detailed overview of the demographic characteristics and motivations of people engaged in genetic or genomic open data sharing. Geographically, respondents were predominantly from the USA. There was no significant gender divide, the age distribution was broad, educational backgrounds varied, and respondents with and without children were equally represented. Health, though prominent, was not the respondents’ primary or only motivation for getting tested. As to their motivations for openly sharing their data, 86.05% indicated wanting to learn about themselves as a relevant motivation, followed by contributing to the advancement of medical research (80.30%), improving the predictability of genetic testing (76.02%), and finding it fun to explore genotype and phenotype data (75.51%). Whereas most respondents were well aware of the privacy risks of their involvement in open genetic data sharing and considered the possibility of direct, personal repercussions troubling, they estimated the risk of this happening to them to be negligible. Our findings highlight the diversity of DTC-GT consumers who decide to openly share their data. Instead of focusing exclusively on the health-related aspects of genetic testing and data sharing, our study emphasizes the importance of taking into account benefits and risks that stretch beyond the health spectrum. Our results thus lend further support to the call for a broader, multi-faceted conceptualization of genomic utility.