Diagnostic adverse events (DAEs) represent an important error type, and their consequences are severe. The causes of DAEs were mostly human, with knowledge-based mistakes and problems with information transfer as the main contributors. Prevention strategies should therefore focus on training physicians and on the organization of knowledge and information transfer.
Diagnostic errors have emerged as a serious patient safety problem, but they are hard to detect and complex to define. At the research summit of the 2013 Diagnostic Error in Medicine 6th International Conference, we convened a multidisciplinary expert panel to discuss challenges in defining and measuring diagnostic errors in real-world settings. In this paper, we synthesize these discussions and outline key research challenges in operationalizing the definition and measurement of diagnostic error. These challenges include 1) determining error when the disease or diagnosis is evolving over time and across different care settings, 2) balancing underdiagnosis against overaggressive diagnostic pursuit, and 3) determining disease likelihood and severity in hindsight. Building on these discussions, we also describe how some of these challenges can be addressed when conducting research on measuring diagnostic error.
Background
Patient record review is believed to be the most useful method for estimating the rate of adverse events among hospitalised patients. However, the method has some practical and financial disadvantages. Some of these might be overcome by using existing reporting systems in which patient safety issues are already recorded, such as incidents reported by healthcare professionals and complaints and medico-legal claims filed by patients or their relatives. The aim of this study was to examine to what extent hospital reporting systems cover the adverse events identified by patient record review.

Methods
We conducted a retrospective study using a database from a record review study of 5375 patient records in 14 hospitals in the Netherlands. Trained nurses and physicians had previously reviewed the records using a method based on the protocol of the Harvard Medical Practice Study. Four reporting systems were linked with the database of reviewed records: 1) informal and 2) formal complaints by patients/relatives, 3) medico-legal claims by patients/relatives and 4) incident reports by healthcare professionals. For each adverse event identified in the patient records, the equivalent was sought in these reporting systems by comparing dates and descriptions of the events. The study focussed on the number of adverse event matches, the overlap of adverse events detected by different sources, the preventability and severity of consequences of reported and non-reported events, and the sensitivity and specificity of reports.

Results
In the sample of 5375 patient records, 498 adverse events were identified. Only 18 of the 498 (3.6%) adverse events identified by record review were found in one or more of the four reporting systems. There was some overlap: one adverse event had an equivalent in both a complaint and an incident report, and in three cases a patient/relative used two or three systems to complain about an adverse event.
Healthcare professionals reported relatively more preventable adverse events than patients did. The reports were neither sensitive to adverse events nor did they have a positive predictive value.

Conclusions
To detect the same adverse events as those identified by patient record review, one cannot rely on the existing reporting systems within hospitals.