As the efficacy of artificial intelligence (AI) in improving aspects of healthcare delivery becomes increasingly evident, it is likely that AI will be incorporated into routine clinical care in the near future. This promise has led to growing focus and investment in AI medical applications from both governmental organizations and technology companies. However, concern has been expressed about the ethical and regulatory aspects of applying AI in health care. These concerns include the possibility of bias, the lack of transparency of certain AI algorithms, privacy issues with the data used to train AI models, and questions of safety and liability when AI is deployed in clinical environments. While there has been extensive discussion of the ethics of AI in health care, there has been little dialogue, and few recommendations, on how to address these concerns in practice. In this article, we propose a governance model that aims not only to address the ethical and regulatory issues that arise from the application of AI in health care, but also to stimulate further discussion about governance of AI in health care.
Online exam supervision technologies have recently generated significant controversy and concern. Their use is now booming due to growing demand for online courses and for off-campus assessment options amid COVID-19 lockdowns. Online proctoring technologies purport to effectively oversee students sitting online exams by using artificial intelligence (AI) systems supplemented by human invigilators. Such technologies have alarmed some students, who see them as a "Big Brother-like" threat to liberty and privacy, and as potentially unfair and discriminatory. However, some universities and educators defend their judicious use. Critical ethical appraisal of online proctoring technologies is overdue. This essay provides one of the first sustained moral philosophical analyses of these technologies, focusing on the ethical notions of academic integrity, fairness, non-maleficence, transparency, privacy, autonomy, liberty, and trust. Most of these concepts are prominent in the new field of AI ethics, and all are relevant to education. After discussing these ethical issues, the essay offers suggestions for educational institutions and educators considering the technologies: the kinds of inquiries they need to make, and the governance and review processes they might need to adopt, in order to justify and remain accountable for using online proctoring technologies. The rapid and contentious rise of proctoring software provides a fruitful ethical case study of how AI is infiltrating all areas of life. The social impacts and moral consequences of this digital technology warrant ongoing scrutiny and study.
Artificial intelligence (AI) is attracting tremendous interest in the medical field. However, failures of medical AI could have serious consequences for both clinical outcomes and the patient experience. These consequences could erode public trust in AI, which could in turn undermine trust in our healthcare institutions. This article makes two contributions. First, it describes the major conceptual, technical, and humanistic challenges in medical AI. Second, it proposes a solution that hinges on the education and accreditation of new expert groups who specialize in the development, verification, and operation of medical AI technologies. Such groups will be essential to maintaining trust in our healthcare institutions.