Events associated with the English National Programme for IT (NPfIT) reinforce that the use of IT in healthcare can create hazardous circumstances and can lead to patient harm or death. Large-scale patient safety events have the potential to affect many patients and clinicians, which suggests that addressing them should be a priority for all major IT implementations.
Objective: Health IT (HIT) systems are increasingly becoming a core infrastructural technology in healthcare. However, failures of these systems can, under certain conditions, lead to patient harm, and as such the safety case for HIT has to be made explicitly. This study focuses on safety assurance practices for HIT in England and investigates how clinicians and engineers currently analyse, control and justify HIT safety risks. Methods: Three workshops were organised, involving 34 clinical and engineering stakeholders, and centred on predefined risk-based questions. This was followed by a detailed review of the Clinical Safety Case Reports for 20 different national and local systems. The data generated was analysed thematically, considering clinical, engineering and organisational factors, and was used to examine the often implicit safety argument for HIT. Results: Two areas of strength were identified: the establishment of a systematic approach to risk management, and close engagement by clinicians. Two areas for improvement were also identified: greater depth and clarity in hazard analysis practices, and greater organisational support for assuring safety. Overall, the dynamic characteristics of healthcare, combined with insufficient funding, have made it challenging to generate and explain the safety evidence to the required level of detail and rigour. Conclusion: Improvements in the form of practical, HIT-specific safety guidelines and tools are needed. The lack of publicly available examples of credible HIT safety cases is a major deficit; the availability of such examples would help clarify the significance of HIT risk analysis evidence and identify the necessary expertise and organisational commitments.
Digital health applications can improve the quality and effectiveness of healthcare by offering users a range of new tools, many of which are considered medical devices. Assuring their safe operation requires, among other things, clinical validation, which needs large datasets to test them in realistic clinical scenarios. Access to such datasets is challenging due to patient privacy concerns, and the development of synthetic datasets is seen as a potential alternative. The objective of this paper is to develop a method for generating realistic synthetic datasets that are statistically equivalent to real clinical datasets, and to demonstrate that a Generative Adversarial Network (GAN) based approach is fit for purpose. A generative adversarial network was implemented and trained, in a series of six experiments, using numerical and categorical variables, including ICD-9 and laboratory codes, from three clinically relevant datasets. A set of contextual steps defined the success criteria for the synthetic dataset. A synthetic dataset was generated that exhibits statistical characteristics very similar to those of the real dataset; pairwise associations between variables are also closely reproduced. A high degree of Jaccard similarity and a passing Kolmogorov-Smirnov (K-S) test further support this. The proof of concept of generating realistic synthetic datasets was successful, and the approach shows promise for further work.
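The abstract names the technique but not the implementation, so the following is a minimal sketch of a GAN for tabular clinical data, assuming PyTorch for the networks and scipy/numpy for the evaluation. The layer sizes, the [0, 1] feature scaling, the stand-in data and the example ICD-9 codes are illustrative assumptions, not the paper's actual setup.

```python
import torch
import torch.nn as nn
import numpy as np
from scipy.stats import ks_2samp

LATENT_DIM = 64   # size of the generator's input noise vector (assumption)
N_FEATURES = 32   # numerical + one-hot-encoded categorical columns (assumption)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, N_FEATURES), nn.Sigmoid(),  # features scaled to [0, 1]
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1), nn.Sigmoid(),  # probability that a record is real
        )
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    """One adversarial round: D learns real vs. synthetic, G learns to fool D."""
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator update on real and (detached) synthetic records.
    fake = G(torch.randn(n, LATENT_DIM)).detach()
    loss_d = bce(D(real_batch), real_labels) + bce(D(fake), fake_labels)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: reward synthetic records the discriminator calls real.
    loss_g = bce(D(G(torch.randn(n, LATENT_DIM))), real_labels)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

train_step(torch.rand(32, N_FEATURES))  # stand-in batch of scaled real records

# Success criteria in the spirit of the abstract: a two-sample K-S test per
# numerical column and Jaccard similarity between sets of codes.
real_col, synth_col = np.random.normal(size=1000), np.random.normal(size=1000)
stat, p = ks_2samp(real_col, synth_col)  # large p: no detectable distribution gap

real_codes = {"250.00", "401.9"}                 # illustrative ICD-9 codes
synth_codes = {"250.00", "401.9", "272.4"}
jaccard = len(real_codes & synth_codes) / len(real_codes | synth_codes)
print(f"K-S p-value: {p:.3f}, Jaccard: {jaccard:.2f}")
```

In practice the per-column K-S tests and Jaccard comparisons would be run between the real dataset and the generator's output rather than between two random stand-ins, as done here only to keep the sketch self-contained.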
Background: Artificial Intelligence (AI) has seen increased application within digital healthcare interventions (DHIs). DHI use entails challenges for safety assurance. Exacerbated by regulatory requirements in the UK, this places the onus of safety assurance not only on the manufacturer but also on the operator of a DHI. Making clinical safety claims and evidencing the safe implementation and use of AI-based DHIs require expertise to understand risk and act to control or mitigate it. Current health software standards, regulation, and guidance do not provide the insight necessary for safer implementation. Objective: To interpret published guidance and policy related to AI and to justify the clinical safety assurance of DHIs. Method: Assessment of UK health regulation policy, standards, and insights from AI institutions, utilizing a published Hazard Assessment framework to structure safety justifications and articulate hazards relating to AI-based DHIs. Results: Identification of hazards for AI-enabled DHIs relating to their implementation and use within healthcare delivery organizations. Conclusion: Applying this method to UK research on AI DHIs highlighted issues that may affect safety and that need consideration to justify the safety of a DHI.
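To make the idea of structuring safety justifications concrete, here is a minimal sketch of a single hazard-log entry for an AI-based DHI, loosely modelled on the hazard-log fields common in UK clinical risk management practice (e.g. DCB0129/DCB0160). The field names, the risk-scoring scheme and the example hazard are illustrative assumptions, not taken from the paper's Hazard Assessment framework.

```python
# Illustrative hazard-log entry for an AI-based DHI; fields and scoring are
# assumptions loosely based on common UK clinical risk management practice.
from dataclasses import dataclass, field
from typing import List

@dataclass
class HazardEntry:
    hazard: str                 # what could go wrong
    cause: str                  # why it could happen
    effect: str                 # clinical consequence if it does
    existing_controls: List[str] = field(default_factory=list)
    severity: int = 1           # 1 (minor) .. 5 (catastrophic), per local matrix
    likelihood: int = 1         # 1 (very low) .. 5 (very high)

    @property
    def risk_score(self) -> int:
        # Simple severity x likelihood score; real matrices may be non-linear.
        return self.severity * self.likelihood

triage_drift = HazardEntry(
    hazard="AI triage model under-prioritises a deteriorating patient",
    cause="Input data drift after a local coding change; model not retrained",
    effect="Delayed clinical review and escalation",
    existing_controls=["Clinician review of all low-priority outputs",
                       "Scheduled model performance monitoring"],
    severity=4,
    likelihood=2,
)
print(triage_drift.risk_score)  # 8 -> compare against acceptability threshold
```

A real hazard log would also record the residual risk after controls are applied and the clinical safety officer accountable for accepting it.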