2019
DOI: 10.12788/jhm.3233

Documentation of Clinical Reasoning in Admission Notes of Hospitalists: Validation of the CRANAPL Assessment Rubric

Abstract: OBJECTIVE: To establish a metric for evaluating hospitalists’ documentation of clinical reasoning in admission notes. STUDY DESIGN: Retrospective study. SETTING: Admissions from 2014 to 2017 at three hospitals in Maryland. PARTICIPANTS: Hospitalist physicians. MEASUREMENTS: A subset of patients admitted with fever, syncope/dizziness, or abdominal pain was randomly selected. The nine-item Clinical Reasoning in Admission Note Assessment & Plan (CRANAPL) tool was developed to assess the comprehensiveness of …

Cited by 7 publications (10 citation statements) | References 43 publications
“…We believe this enhances the existing tool as the addition of the descriptive anchors creates a shared mental model for feedback on clinical reasoning documentation focused on residents and fellows. 11,20–22 We demonstrated validity evidence for this rubric using Messick's framework, ultimately creating a rubric that was easy to use with minimal training of faculty, thus easily implementable. Additionally, we demonstrated good reliability with an intraclass correlation between raters of 0.84, which is higher than what was described with the original IDEA assessment tool and higher than what we experienced using the original IDEA assessment tool in the first round of the iterative process of development.…”
Section: Discussion (mentioning)
confidence: 89%
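
The 0.84 reported in the statement above is an intraclass correlation (ICC), a standard inter-rater reliability statistic. As an illustration only, not drawn from the paper, the following is a minimal Python sketch of computing an ICC from two raters' note scores using the open-source pingouin library; the note IDs, rater labels, and scores are hypothetical.

# Minimal ICC sketch; data are hypothetical, not from the CRANAPL study.
import pandas as pd
import pingouin as pg

# Long format: one row per (note, rater) pair.
ratings = pd.DataFrame({
    "note":  [1, 1, 2, 2, 3, 3, 4, 4],
    "rater": ["A", "B"] * 4,
    "score": [7, 8, 5, 5, 9, 8, 4, 5],  # e.g., total rubric scores per note
})

icc = pg.intraclass_corr(data=ratings, targets="note",
                         raters="rater", ratings="score")
# ICC2 (two-way random effects, absolute agreement) is a common choice
# for inter-rater reliability of the kind reported above.
print(icc.loc[icc["Type"] == "ICC2", ["ICC", "CI95%"]])

ICC2 treats both the notes and the raters as random samples, which fits a setting where any pair of trained raters might score any admission note; the papers quoted here do not specify which ICC form they used.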
“…12–17 There are several note rating instruments that have been validated to assess documentation quality such as QNOTE, PDQI-9, the RED checklist, the HAPA form, the P-HAPEE rubric, the IDEA assessment tool, and the CRANAPL assessment rubric. 8,11,18–22 However, these note rating instruments possess varying degrees of detailed evaluation of clinical reasoning. Of these note rating instruments, the IDEA assessment tool developed by Baker et al includes a robust assessment of clinical reasoning documentation.…”
Section: Introduction (mentioning)
confidence: 99%
“…In our own PICU, we will start by relaying our findings to staff at local venues such as patient safety meetings, discussing cases involving opportunities to improve diagnosis documentation at our morbidity and mortality conference, and working with our institution’s clinical documentation program to ensure that expectations for documentation address both reimbursement goals and the need to communicate diagnoses effectively. Future strategies to improve diagnosis documentation may include teaching clinicians how to write organized and well-supported diagnosis narratives which acknowledge uncertainty and identify remaining unknowns (8), using dictation and transcribing technology judiciously to create concise but complete notes, implementing documentation reminders and templates (22), formally assessing the quality of clinical reasoning discerned from narrative notes (25), and implementing peer-to-peer audit and feedback regarding the clarity and utility of documentation (22).…”
Section: Discussion (mentioning)
confidence: 99%
“…Furthermore, we were able to model the instrument on the basis of several previously validated instruments that assess reasoning through written notes (interpretative summary, differential diagnosis, explanation of reasoning, and alternatives, Clinical Reasoning in Admission Note Assessment and Plan, and Diagnostic Justification). 22–24 The simulated encounter was accessed via a website created with WordPress and hosted by Temple University (https://sites.temple.edu/rene/case-2/; not publicly searchable). It included a 2-minute video of a standardized patient in an emergency department describing the history of her present illness with text captions.…”
Section: Instrument Development and Study Design (mentioning)
confidence: 99%