Big data has become the ubiquitous watchword of medical innovation. The rapid development of machine-learning techniques and artificial intelligence, in particular, has promised to revolutionize medical practice, from the allocation of resources to the diagnosis of complex diseases. But with big data come big risks and challenges, among them significant questions about patient privacy. This article outlines the legal and ethical challenges that big data brings to patient privacy. It discusses, among other things, how best to conceive of health privacy; the importance of equity, consent, and patient governance in data collection; discrimination in data uses; and how to handle data breaches. It closes by sketching possible ways forward for the regulatory system.
The use of social media as a recruitment tool for research with humans is increasing and is likely to continue to grow. Despite this, to date there has been no specific regulatory guidance, and little in the bioethics literature, to guide investigators and IRBs navigating the ethical issues it raises. We begin to fill this gap by, first, defending a non-exceptionalist methodology for assessing social media recruitment; second, examining respect for privacy and investigator transparency as key norms governing social media recruitment; and, finally, analyzing three relatively novel aspects of social media recruitment: (i) the ethical significance of compliance with website ‘terms of use’; (ii) the ethics of recruiting from the online networks of research participants; and (iii) the ethical implications of online communication from and between participants. Two checklists aimed at guiding investigators and IRBs through these ethical issues are included as Appendices.
Effy Vayena and colleagues argue that machine learning in medicine must offer data protection, algorithmic transparency, and accountability to earn the trust of patients and clinicians.
Artificial intelligence (AI) is quickly making inroads into medical practice, especially in forms that rely on machine learning, amid a mix of hope and hype.1 Multiple AI-based products have now been approved or cleared by the US Food and Drug Administration (FDA), and health systems and hospitals are increasingly deploying AI-based systems.2 For example, medical AI can support clinical decisions, such as recommending drugs or dosages or interpreting radiological images.2 One key difference from most traditional clinical decision support software is that some medical AI may communicate results or recommendations to the care team without being able to communicate the underlying reasons for those results.3 Medical AI may be trained in inappropriate environments, using imperfect techniques, or on incomplete data. Even when algorithms are trained as well as possible, they may, for example, miss a tumor in a radiological image or suggest an incorrect dose or an inappropriate drug. Sometimes patients will be injured as a result. In this Viewpoint, we discuss when a physician could be held liable under current law for using medical AI.