OBJECTIVE: To establish a metric for evaluating hospitalists’ documentation of clinical reasoning in admission notes.
STUDY DESIGN: Retrospective study.
SETTING: Admissions from 2014 to 2017 at three hospitals in Maryland.
PARTICIPANTS: Hospitalist physicians.
MEASUREMENTS: A subset of patients admitted with fever, syncope/dizziness, or abdominal pain was randomly selected. The nine-item Clinical Reasoning in Admission Note Assessment & Plan (CRANAPL) tool was developed to assess the comprehensiveness of clinical reasoning documented in the assessment and plans (A&Ps) of admission notes. Two authors scored all A&Ps using this tool. The A&Ps were also scored on global clinical reasoning and global readability/clarity measures. All data were deidentified prior to scoring.
RESULTS: The 285 admission notes evaluated were written by 120 hospitalists. The mean total CRANAPL score across both raters was 6.4 (SD 2.2). The intraclass correlation coefficient measuring interrater reliability for the total CRANAPL score was 0.83 (95% CI, 0.76-0.87). Associations between the CRANAPL total score and the global clinical reasoning and global readability/clarity measures were statistically significant (P < .001). Notes from the two academic hospitals had higher CRANAPL scores (7.4 [SD 2.0] and 6.6 [SD 2.1], respectively) than those from the community hospital (5.2 [SD 1.9]; P < .001).
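Note on the reliability statistic: the abstract does not state which intraclass correlation model was used. The sketch below, assuming ICC(2,1) (two-way random effects, absolute agreement, single rater) applied to a notes-by-raters matrix of total CRANAPL scores, is illustrative only; the variable names and simulated ratings are hypothetical and do not reproduce the study data.

import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    scores is an (n_notes, k_raters) array of total CRANAPL scores."""
    n, k = scores.shape
    grand_mean = scores.mean()
    note_means = scores.mean(axis=1)       # mean score per note
    rater_means = scores.mean(axis=0)      # mean score per rater

    ss_notes = k * np.sum((note_means - grand_mean) ** 2)
    ss_raters = n * np.sum((rater_means - grand_mean) ** 2)
    ss_total = np.sum((scores - grand_mean) ** 2)
    ss_error = ss_total - ss_notes - ss_raters

    ms_notes = ss_notes / (n - 1)               # between-notes mean square
    ms_raters = ss_raters / (k - 1)             # between-raters mean square
    ms_error = ss_error / ((n - 1) * (k - 1))   # residual mean square

    # Shrout & Fleiss ICC(2,1) formula
    return (ms_notes - ms_error) / (
        ms_notes + (k - 1) * ms_error + k * (ms_raters - ms_error) / n
    )

# Illustrative simulated data: 285 notes, each scored by two hypothetical raters.
rng = np.random.default_rng(0)
true_quality = rng.normal(6.4, 2.0, size=285)
ratings = true_quality[:, None] + rng.normal(0, 0.9, size=(285, 2))
print(round(icc_2_1(ratings), 2))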
CONCLUSIONS: This study represents a first step toward characterizing the documentation of clinical reasoning in hospital medicine. With some validity evidence established for the CRANAPL tool, it may be possible to assess hospitalists' documentation of clinical reasoning.