Surveys about the usability of EHR systems are needed to monitor their development at regional and national levels. To our knowledge, this study is the first national eHealth observatory questionnaire that focuses on usability and is used to monitor the long-term development of EHRs. The results do not show notable improvements in physicians' ratings of their EHRs between 2010 and 2014 in Finland. Instead, the results indicate serious problems and deficiencies that considerably hinder the efficiency of EHR use and physicians' routine work. The survey results call for a considerable amount of development work in order to achieve the expected benefits of EHR systems and to avoid technology-induced errors that may endanger patient safety. The findings of repeated surveys can be used to inform healthcare providers, decision makers and politicians about the current state of EHR usability and the differences between brands, as well as to guide improvements in EHR usability. This survey will be repeated in 2017, and there is a plan to include other healthcare professional groups in future surveys.
Impacts of structuring nursing records: a systematic review (Scand J Caring Sci 2014; 28: 629-647)
Aim: The study aims to describe the impacts of different data structuring methods used in nursing records or care plans. This systematic review examines which structuring methods have been evaluated and what effects data structures have on healthcare input, processes and outcomes in previous studies.
Materials and Methods: Retrieval from 15 databases yielded 143 papers. Based on the Population (Participants), Intervention, Comparators and Outcomes elements and the inclusion and exclusion criteria, the search produced 61 studies. A data extraction tool and an analysis of the empirical articles were used to classify the data with reference to the study aim. Thirty-eight studies were included in the final analysis.
Findings: The study design used most often was a single measurement without any control. The studies were conducted mostly in secondary or tertiary care in institutional care contexts. The standards used in documentation were nursing classifications or the nursing process model in clinical use. The use of standardised nursing language (SNL) increased descriptions of nursing interventions and outcomes, supporting daily care and improving patient safety and information reuse.
Discussion: The nursing process model and classifications are used internationally as nursing data structures in nursing records and care plans. The use of SNL revealed various positive impacts. Unexpected outcomes were most often related to a lack of resources.
Limitations: Indexing of SNL studies has not been consistent. This might bias database retrieval, and important articles may be missing. The designs of the studies analysed varied widely. Further, the time frame of the papers was quite long, causing confusion in the descriptions of nursing data structures.
Conclusion: The value of SNL is demonstrated by its support of daily workflow, delivery of nursing care and data reuse. This facilitates continuity of care, thus contributing to patient safety. Nurses need more education and managerial support in order to benefit from SNL.
Objectives: This paper draws attention to: i) key considerations for evaluating artificial intelligence (AI) enabled clinical decision support; and ii) challenges and practical implications of AI design, development, selection, use, and ongoing surveillance.
Method: A narrative review of existing research and evaluation approaches, along with expert perspectives drawn from the International Medical Informatics Association (IMIA) Working Group on Technology Assessment and Quality Development in Health Informatics and the European Federation for Medical Informatics (EFMI) Working Group for Assessment of Health Information Systems.
Results: There is a rich history and tradition of evaluating AI in healthcare. While evaluators can learn from past efforts and build on best-practice evaluation frameworks and methodologies, questions remain about how to evaluate the safety and effectiveness of AI that dynamically harnesses vast amounts of genomic, biomarker, phenotype, electronic record, and care delivery data from across health systems. This paper first provides a historical perspective on the evaluation of AI in healthcare. It then examines key challenges of evaluating AI-enabled clinical decision support during design, development, selection, use, and ongoing surveillance. Practical aspects of evaluating AI in healthcare, including approaches to evaluation and indicators for monitoring AI, are also discussed.
Conclusion: Commitment to rigorous initial and ongoing evaluation will be critical to ensuring the safe and effective integration of AI in complex sociotechnical settings. Specific enhancements that are required for the new generation of AI-enabled clinical decision support will emerge through practical application.