Objective
To provide an overview of documented studies and initiatives that demonstrate efforts to manage and improve alarm systems for quality in healthcare through human, organisational and technical factors.

Methods
A literature review, a grey literature review, interviews and a review of alarm-related standards (IEC 60601-1-8, IEC 62366-1:2015 and ANSI/Association for the Advancement of Medical Instrumentation (AAMI) HE 75:2009/2013) were conducted. Qualitative analysis was conducted to identify common themes of improvement elements across the literature and grey literature reviews, the interviews and the review of alarm-related standards.

Results
21 articles and 7 publications on alarm quality improvement work were included in the literature and grey literature reviews, from which 10 themes of improvement elements were identified. The 10 themes were categorised into human factors (alarm training and education, multidisciplinary teamwork, and alarm safety culture), organisational factors (alarm protocols and standard procedures, alarm assessment and evaluation, alarm inventory and prioritisation, and sharing and learning) and technical factors (machine learning, alarm configuration and alarm design). 26 clinicians were interviewed; 9 of the 10 themes were identified from the interview responses. The review of the standards identified 3 of the 10 themes. The study findings are also presented as a step-by-step guide to help healthcare organisations implement the improvement elements.

Conclusions
Improving alarm safety can be achieved by incorporating human, organisational and technical factors in an integrated approach. A gap remains between alarm-related standards and how those standards are translated into practice, especially in clinical environments that use multiple alarming medical devices from different manufacturers. Standardisation across devices and manufacturers, and the use of machine learning to improve alarm safety, should be discussed in future collaboration between alarm manufacturers, end users and regulators.
User trust in Artificial Intelligence (AI)-enabled systems is increasingly recognized as a key element in fostering adoption. It has been suggested that AI-enabled systems must go beyond technology-centric approaches and embrace a more human-centric approach, a core principle of the human-computer interaction (HCI) field. This review provides an overview of user trust definitions, influencing factors, and measurement methods from 23 empirical studies, to gather insights for future technical and design strategies, research, and initiatives to calibrate the user-AI relationship. The findings confirm that there is more than one way to define trust; the focus should be on selecting the trust definition most appropriate for depicting user trust in a specific context, rather than on comparing definitions. User trust in AI-enabled systems is found to be influenced by three main themes: socio-ethical considerations, technical and design features, and user characteristics. User characteristics dominate the findings, reinforcing the importance of user involvement from development through to monitoring of AI-enabled systems. Different contexts and varying characteristics of both users and systems are also found to influence user trust, highlighting the importance of selecting and tailoring system features to the characteristics of the targeted user group. Importantly, socio-ethical considerations can help ensure that the environment in which user-AI interactions take place is sufficiently conducive to establishing and maintaining a trusted relationship. In measuring user trust, surveys are found to be the most common method, followed by interviews and focus groups. In conclusion, user trust needs to be addressed directly in every context where AI-enabled systems are used or discussed. In addition, calibrating the user-AI relationship requires finding the optimal balance that works for the user as well as the system.
In this short article, the benefits of open access articles for industry are discussed from the perspective of industry as both author and reader of open access articles. Open access articles remove barriers to sharing knowledge and experience and to building collaboration, all of which are crucial for an industry that wishes to make a global impact for a sustainable future.