Background
Providing understandable information to patients is necessary to achieve the aims of the informed consent process: respecting and promoting patients' autonomy and protecting patients from harm. In recent decades, new, primarily digital technologies have been used to apply and test innovative formats of informed consent. We conducted a systematic review to explore the impact of using digital tools for informed consent in both clinical research and clinical practice. Understanding, satisfaction, and participation were compared for digital tools versus the non-digital informed consent process.

Methods
We searched the available electronic databases, including PubMed, EMBASE, and Cochrane. Studies were identified using specific MeSH terms and keywords. We included studies, published from January 2012 to October 2020, that focused on the use of digital informed consent tools for clinical research or clinical procedures. Digital interventions were defined as interventions that used multimedia or audio-video to provide information to patients. We classified the interventions into three categories: video only, non-interactive multimedia, and interactive multimedia.

Results
Our search yielded 19,579 publications. After title and abstract screening, 100 studies were retained for full-text analysis, of which 73 publications were included. Studies examined interactive multimedia (29/73), non-interactive multimedia (13/73), and videos (31/73), and most (34/38) studies were conducted on adults. Innovations in consent were tested for clinical/surgical procedures (26/38) and clinical research (12/38). For research informed consent (IC), 21 outcomes were explored, with a positive effect on at least one of the studied outcomes being observed in 8/12 studies. For clinical/surgical procedures, 49 outcomes were explored, and 21/26 studies reported a positive effect on at least one of the studied outcomes.

Conclusions
Digital technologies for informed consent were not found to negatively affect any of the outcomes, and overall, multimedia tools appear desirable. Multimedia tools showed a greater impact than videos alone. The presence of a researcher may enhance the efficacy of different outcomes in research IC processes. Studies were heterogeneous in design, making evaluation of impact challenging. Robust study designs, including standardization, are needed to conclusively assess impact.
Artificial Intelligence (AI) has the potential to greatly improve the delivery of healthcare and other services that advance population health and wellbeing. However, the use of AI in healthcare also brings potential risks that may cause unintended harm. To guide future developments in AI, the High-Level Expert Group on AI (AI HLEG) set up by the European Commission (EC) recently published ethics guidelines for what it terms "trustworthy" AI. These guidelines are aimed at a variety of stakeholders, especially guiding practitioners toward more ethical and more robust applications of AI. In line with the efforts of the EC, AI ethics scholarship focuses increasingly on converting abstract principles into actionable recommendations. However, the interpretation, relevance, and implementation of trustworthy AI depend on the domain and the context in which the AI system is used. The main contribution of this paper is to demonstrate how to use the general AI HLEG trustworthy AI guidelines in practice in the healthcare domain. To this end, we present a best practice of assessing the use of machine learning as a supportive tool to recognize cardiac arrest in emergency calls. The AI system under assessment is currently in use in the city of Copenhagen in Denmark. The assessment is carried out by an independent team composed of philosophers, policy makers, social scientists, and technical, legal, and medical experts. By leveraging an interdisciplinary team, we aim to expose the complex trade-offs and the necessity for such thorough human review when tackling socio-technical applications of AI in healthcare. For the assessment, we use a process for assessing trustworthy AI, called Z-Inspection®, to identify specific challenges and potential ethical trade-offs when we consider AI in practice.
The article focuses on the philosophical, ethical, and juridical problems concerning advance directives and living wills, underlining their analogies and differences. The author makes a critical comparison between the theories supporting living wills (the liberal theory appealing to the principle of self-determination and the utilitarian theory of the "quality of life") and the arguments against them (with reference to the foundation of the absolute value and dignity of human life until its end).