This is a condensed summary of an international multisociety statement on ethics of artificial intelligence (AI) in radiology produced by the ACR, European Society of Radiology, RSNA, Society for Imaging Informatics in Medicine, European Society of Medical Imaging Informatics, Canadian Association of Radiologists, and American Association of Physicists in Medicine. AI has great potential to increase efficiency and accuracy throughout radiology, but it also carries inherent pitfalls and biases. Widespread use of AI-based intelligent and autonomous systems in radiology can increase the risk of systemic errors with high consequence and highlights complex ethical and societal issues. Currently, there is little experience using AI for patient care in diverse clinical settings. Extensive research is needed to understand how to best deploy AI in clinical practice. This statement highlights our consensus that ethical use of AI in radiology should promote well-being, minimize harm, and ensure that the benefits and harms are distributed among stakeholders in a just manner. We believe AI should respect human rights and freedoms, including dignity and privacy. It should be designed for maximum transparency and dependability. Ultimate responsibility and accountability for AI remains with its human designers and operators for the foreseeable future. The radiology community should start now to develop codes of ethics and practice for AI that promote any use that helps patients and the common good and should block use of radiology data and algorithms for financial gain without those two attributes.
The full version (Appendix E1 [online]) is posted on the web pages of each of these societies. Authors include society representatives, patient advocates, an American professor of philosophy, and attorneys with experience in radiology and privacy in the United States and the European Union. Artificial intelligence (AI), defined as computers that behave in ways that previously were thought to require human intelligence, has the potential to substantially improve radiology, help patients, and decrease cost (1). Radiologists are experts at acquiring information from medical images. AI can extend this expertise, extracting even more information to make better or entirely new predictions about patients. Going forward, conclusions about images will be made by human radiologists in conjunction with intelligent and autonomous machines. Although the machines will make mistakes, they are likely to make decisions more efficiently and with more consistency than humans, and in some instances will contradict human radiologists and be proven correct. AI will affect image interpretation, report generation, result communication, and billing practice (1,2). AI has the potential to alter professional relationships, patient engagement, knowledge hierarchy, and the labor market. Additionally, AI may exacerbate the concentration and imbalance of resources, with entities that have significant AI resources gaining more "radiology decision-making" capabilities. Radiologists and radiology departments will themselves become data, categorized and evaluated by AI models. AI will infer patterns in personal, professional, and institutional behavior. The value, ownership, use of, and access to radiology data have taken on new meanings and significance in the era of AI. AI is complex and carries potential pitfalls and inherent biases.
Widespread use of AI-based intelligent and autonomous machines in radiology can increase systemic risks of harm, raise the possibility of errors with high consequences, and amplify complex ethical and societal issues.