Introduction:
Communication failures pose a significant threat to the quality of care and safety of hospitalized patients. Yet little is known about the nature of communication failures. The aims of this study were to identify and describe types of communication failures in which nurses and physicians were involved and determine how different types of communication failures might affect patient outcomes.
Methods:
Incident reports filed during fiscal year 2015–2016 at a Midwestern academic health care system (N = 16,165) were electronically filtered and manually reviewed to identify reports that described communication failures involving nurses and physicians (n = 161). Failures were categorized by type using two classification systems: contextual and conceptual. Thematic analysis was used to identify patient outcomes: actual or potential harm, patient dissatisfaction, delay in care, or no harm. Frequencies of failure types and outcomes were assessed using descriptive statistics. Associations between failure type and patient outcomes were evaluated using Fisher's exact test.
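The association test named above can be sketched in code. Below is a minimal, illustrative implementation of a two-sided Fisher's exact test for a 2x2 table; the counts are invented for illustration and are not from the study, which compared multi-level failure categories and outcomes (a setting that requires extending this enumeration beyond the 2x2 case):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Enumerates every table with the same row/column margins and sums
    the hypergeometric probabilities of tables no more probable than
    the observed one.
    """
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d
    denom = comb(n, col1)

    def prob(x):
        # P(top-left cell = x) under fixed margins (hypergeometric)
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # tiny tolerance guards against floating-point ties
    return sum(p for p in (prob(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Hypothetical counts (not from the study): failure category vs. harm outcome
p_value = fisher_exact_2x2(12, 30, 8, 25)
print(f"two-sided p = {p_value:.3f}")
```

A nonsignificant p-value here, as in the study, would mean the observed distribution of outcomes across failure types is plausible under independence.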
Results:
Of the 211 identified contextual communication failures, errors of omission were the most common (27.0%). More than half of conceptual failures were transfer of information failures (58.4%), while 41.6% demonstrated a lack of shared understanding. Of the 179 identified outcomes, 38.0% were delays in care, 20.1% were physical harm, and 8.9% were dissatisfaction. There was no statistically significant association between failure type category and patient outcomes.
Conclusion:
Incident reports can be used to identify specific types of communication failures and the patient outcomes associated with them. This work provides a basis for future intervention development to prevent communication-related adverse events by tailoring interventions to specific types of failures.
Background The lack of machine-interpretable representations of consent permissions precludes development of tools that act upon permissions across information ecosystems, at scale.
Objectives To report the process, results, and lessons learned while annotating permissions in clinical consent forms.
Methods We conducted a retrospective analysis of clinical consent forms. We developed an annotation scheme following the MAMA (Model-Annotate-Model-Annotate) cycle and evaluated interannotator agreement (IAA) using observed agreement (A_o), weighted kappa (κ_w), and Krippendorff's α.
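As a concrete illustration of the simplest of these metrics, here is a minimal sketch of observed agreement (A_o) in the "all annotators assign the same label" sense. Note that this is one common formulation; reported A_o values are sometimes computed as average pairwise agreement instead, and the labels below are invented toy data:

```python
def observed_agreement(items):
    """Observed agreement A_o: the fraction of items on which every
    annotator assigned the same label.
    """
    agreed = sum(1 for labels in items if len(set(labels)) == 1)
    return agreed / len(items)

# Toy data: three annotators label four sentences
# (1 = permission-sentence, 0 = not a permission-sentence)
ratings = [
    [1, 1, 1],  # unanimous: permission-sentence
    [1, 0, 1],  # disagreement
    [0, 0, 0],  # unanimous: not a permission-sentence
    [0, 0, 0],
]
print(observed_agreement(ratings))  # → 0.75
```

Unlike chance-corrected measures such as κ_w or Krippendorff's α, raw observed agreement can look high simply because one label (here, "not a permission-sentence") dominates the data.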
Results The final dataset included 6,399 sentences from 134 clinical consent forms. Complete agreement was achieved for 5,871 sentences, including 211 positively identified and 5,660 negatively identified as permission-sentences across all three annotators (A_o = 0.944, Krippendorff's α = 0.599). These values reflect moderate to substantial IAA. Although permission-sentences share a common vocabulary and structure, disagreements between annotators are largely explained by lexical variability and ambiguity in sentence meaning.
Conclusion Our findings point to the complexity of identifying permission-sentences within clinical consent forms. We present our results in light of lessons learned, which may serve as a launching point for developing tools for automated permission extraction.