Objective: To systematically review measures of data quality in electronic patient records (EPRs) in primary care.
Design: Systematic review of English language publications, 1980-2001.
Data sources: Bibliographic searches of medical databases, specialist medical informatics databases, conference proceedings, and institutional contacts.
Study selection: Studies selected according to a predefined framework for categorising review papers.
Data extraction: Reference standards and measurements used to judge quality.
Results: Bibliographic searches identified 4589 publications. After primary exclusions, 174 articles were classified, 52 of which met the inclusion criteria for review. Selected studies were primarily descriptive surveys. Variability in methods prevented meta-analysis of results. Forty-eight publications were concerned with diagnostic data, 37 studies measured data quality, and 15 scoped EPR quality. Reliability of data was assessed by rate comparison. Measures of sensitivity were highly dependent on the element of EPR data being investigated, while the positive predictive value was consistently high, indicating good validity. Prescribing data were generally of better quality than diagnostic or lifestyle data.
Conclusion: The lack of standardised methods for assessing the quality of data in electronic patient records makes it difficult to compare results between studies. Studies should present data quality measures with clear numerators, denominators, and confidence intervals. Ambiguous terms such as "accuracy" should be avoided unless precisely defined.
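The conclusion's recommendation, that quality measures be reported with explicit numerators, denominators, and confidence intervals, can be made concrete with a short sketch. The example below is purely illustrative and not drawn from any reviewed study: the counts are invented, and the choice of Wilson score intervals is an assumption. It computes sensitivity and positive predictive value for a hypothetical validation of EPR diagnostic codes against a reference standard.

```python
import math

def proportion_ci(numerator: int, denominator: int, z: float = 1.96):
    """Wilson score interval for a proportion (95% by default)."""
    if denominator == 0:
        raise ValueError("denominator must be positive")
    p = numerator / denominator
    centre = (p + z**2 / (2 * denominator)) / (1 + z**2 / denominator)
    half_width = (z / (1 + z**2 / denominator)) * math.sqrt(
        p * (1 - p) / denominator + z**2 / (4 * denominator**2)
    )
    return p, max(0.0, centre - half_width), min(1.0, centre + half_width)

# Hypothetical validation counts: EPR diagnostic codes checked against a
# reference standard such as manual note review.
true_positives = 180   # coded in the EPR and confirmed by the reference standard
false_negatives = 40   # present in the reference standard but missing from the EPR
false_positives = 15   # coded in the EPR but not confirmed

# Sensitivity = TP / (TP + FN); positive predictive value = TP / (TP + FP)
for name, num, den in [
    ("sensitivity", true_positives, true_positives + false_negatives),
    ("PPV", true_positives, true_positives + false_positives),
]:
    estimate, lower, upper = proportion_ci(num, den)
    print(f"{name}: {num}/{den} = {estimate:.2f} (95% CI {lower:.2f} to {upper:.2f})")
```

Reporting the raw numerator and denominator alongside the interval, as in the printed output, is what allows results from different practices or studies to be compared and pooled later.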
Through its involvement in primary care groups (PCGs), general practice is destined to play a major role in the commissioning of UK National Health Service (NHS) services. If this objective is to be achieved, data will be required at PCG level from electronic patient records (EPRs). The aim of this study was to examine the completeness and accuracy of GP computer records over the last five years in two computerised practices. The objectives were: to establish whether data recording had improved; to discover whether cross-platform data extraction using MIQUEST (Morbidity Information Query and Export Syntax) has advantages over the different in-system search tools provided by the EMIS and Meditel systems in our pilot practices; and to suggest how the potential of GP databases as sources of valid data might be realised.
We have demonstrated that high standards of data quality can be achieved in general practice and that there are benefits in using the cross-platform data extraction tool MIQUEST. While remaining vigilant about coding practices that may introduce systematic errors, data extractors should focus on data validation. We surmise that there are both human and technical barriers to achieving high quality data recording that need further research.
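One way to focus validation effort when two extraction routes are available is to compare the register counts each returns for the same practice and flag any register where they disagree by more than a chosen tolerance. The sketch below is a hypothetical illustration, not the method used in the pilot practices: the condition names, counts, and 2% tolerance are all assumptions, and no MIQUEST query syntax is shown, only a comparison of counts already extracted.

```python
# Hypothetical disease-register counts returned by two extraction routes for the
# same practice: a cross-platform query set (e.g. MIQUEST) versus the clinical
# system's own search tool. The figures are invented for illustration.
counts = {
    "asthma":       {"cross_platform": 412, "in_system": 418},
    "diabetes":     {"cross_platform": 256, "in_system": 256},
    "hypertension": {"cross_platform": 701, "in_system": 689},
}

TOLERANCE = 0.02  # flag discrepancies greater than 2% for manual validation

for condition, c in counts.items():
    reference = max(c["cross_platform"], c["in_system"])
    gap = abs(c["cross_platform"] - c["in_system"]) / reference
    status = "check coding" if gap > TOLERANCE else "consistent"
    print(f"{condition}: cross-platform={c['cross_platform']} "
          f"in-system={c['in_system']} difference={gap:.1%} -> {status}")
```

A discrepancy above the tolerance does not show which route is wrong; it simply identifies registers where coding practices should be examined for the kind of systematic errors mentioned above.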
Background: The quality of data in general practice clinical information systems varies enormously. This variability jeopardises the proposed national strategy for an integrated care records service and the capacity of primary care organisations to respond coherently to the demands of clinical governance and the proposed quality-based general practice contract. This is apparent in the difficulty in automating the audit process and in comparing aggregated data from different practices. In an attempt to provide data of adequate quality to support such operational needs, increasing emphasis is being placed on the standardisation of data recording.
Objective: To develop a conceptual framework to facilitate the recording of standardised data within primary care.
Method: A multiprofessional group of primary care professionals from the South Thames Research Network examined leading guidelines for best practice. Using the nominal group technique, the group prioritised the information needs of primary care organisations for managing coronary heart disease according to current evidence.
Results: The information needs identified were prioritised and stratified into a functional framework.
Conclusion: It has been possible, within the context of a primary care research network, to produce a framework for standardising data collection. Motivation of front-line clinicians was achieved by incorporating their views into the synthesis of the dataset.