The last few years have seen a proliferation of principles for AI ethics. There is substantial overlap between different sets of principles, with widespread agreement that AI should be used for the common good, should not be used to harm people or undermine their rights, and should respect widely held values such as fairness, privacy, and autonomy. While articulating and agreeing on principles is important, it is only a starting point. Drawing on comparisons with the field of bioethics, we highlight some of the limitations of principles: in particular, they are often too broad and high-level to guide ethics in practice. We suggest that an important next step for the field of AI ethics is to focus on exploring the tensions that inevitably arise as we try to implement principles in practice. By explicitly recognising these tensions we can begin to make decisions about how they should be resolved in specific cases, and develop frameworks and guidelines for AI ethics that are rigorous and practically relevant. We discuss several specific ways in which tensions arise in AI ethics, and what processes might be needed to resolve them.
Background: The current COVID-19 pandemic requires sustainable behavior change to mitigate the impact of the virus. A phenomenon that has arisen in parallel with this pandemic is an infodemic: an overabundance of information, some accurate and some not, that makes it hard for people to find trustworthy and reliable guidance for making informed decisions. This infodemic has also been found to create distress and increase risks for mental health disorders, such as depression and anxiety. Aim: To propose practical guidelines for public health and risk communication that will enhance current recommendations and cut through the infodemic, supporting accessible, reliable, actionable, and inclusive communication. The guidelines aim to support the basic human psychological needs of autonomy, competence, and relatedness in order to promote well-being and sustainable behavior change. Method: We applied Self-Determination Theory (SDT) and concepts from psychology, philosophy, and human-computer interaction to better understand human behaviors and motivations and to propose practical guidelines for public health communication focused on well-being and sustainable behavior change. We then systematically searched the literature for research on health communication strategies during COVID-19 to discuss our proposed guidelines in light of the emerging literature. We illustrate the guidelines with a communication case study: wearing face coverings. Findings: We propose five practical guidelines for public health and risk communication that cut through the infodemic and support well-being and sustainable behavior change: (1) create an autonomy-supportive health care climate; (2) provide choice; (3) apply a bottom-up approach to communication; (4) create solidarity; (5) be transparent and acknowledge uncertainty.
Aim To develop a consensus paper on the central points of an international invitational think‐tank on nursing and artificial intelligence (AI). Methods We established the Nursing and Artificial Intelligence Leadership (NAIL) Collaborative, comprising interdisciplinary experts in AI development, biomedical ethics, AI in primary care, AI legal aspects, philosophy of AI in health, nursing practice, implementation science, leaders in health informatics practice and international health informatics groups, a representative of patients and the public, and the Chair of the ITU/WHO Focus Group on Artificial Intelligence for Health. The NAIL Collaborative convened at a 3‐day invitational think tank in autumn 2019. Activities included a pre‐event survey, expert presentations, and working sessions to identify priority areas for action, opportunities, and recommendations to address these. In this paper, we summarize the key discussion points and notes from these activities. Implications for nursing Nursing's currently limited engagement with discourses on AI and health poses a risk that the profession will not be part of conversations that have potentially significant impacts on nursing practice. Conclusion There are numerous gaps and a timely need for the nursing profession to be among the leaders and drivers of conversations around AI in health systems. Impact We outline crucial gaps where focused effort is required for nursing to take a leadership role in shaping AI use in health systems. Three priorities were identified that need to be addressed in the near future: (a) nurses must understand the relationship between the data they collect and the AI technologies they use; (b) nurses need to be meaningfully involved in all stages of AI, from development to implementation; and (c) there is substantial untapped and unexplored potential for nursing to contribute to the development of AI technologies for global health and humanitarian efforts.