2021
DOI: 10.1007/s11606-021-06805-6
Development of a Clinical Reasoning Documentation Assessment Tool for Resident and Fellow Admission Notes: a Shared Mental Model for Feedback

Abstract: BACKGROUND: Residents and fellows receive little feedback on their clinical reasoning documentation. Barriers include the lack of a shared mental model and variability in the reliability and validity of existing assessment tools. Among existing tools, the IDEA assessment tool provides a robust assessment of clinical reasoning documentation focused on four elements (interpretive summary, differential diagnosis, and explanation of reasoning for the lead and alternative diagnoses) but lacks descriptive anchors, threatening …

Cited by 17 publications (10 citation statements). References 29 publications.
“…Physicians' vignette responses may differ from clinical practice; however, physician R-IDEA scores in clinical documentation were lower than physician scores in this study. 6 We also used a zero-shot approach for chatbot's prompt.…”
Section: Supplemental Content
confidence: 99%
“…Primary outcome was the Revised-IDEA (R-IDEA) score, a validated 10-point scale evaluating 4 core domains of clinical reasoning documentation (eTable 3 in Supplement 1). 6 To establish reliability, we (D.R., Z.K., A.R.) independently scored 29 section responses from 8 nonparticipants, showing substantial scoring agreement (mean Cohen weighted κ = 0.61).…”
confidence: 99%
“…23 For instance, models like Schaye et al's ML model for automated assessment of resident clinical reasoning documentation are examples of supervised ML that use text-based labeled datasets. 24 Such models help overcome traditional barriers in medical education assessment by providing a sufficient number of assessment inputs and consistency in standards of assessment. 25 …”
Section: Proactive Data Collection
confidence: 99%
“…AI is increasingly being used in the assessment of physician competence across various levels of learners (undergraduate medical education [UME], graduate medical education [GME], and continuing medical education [CME]), competency domains (e.g., medical knowledge and patient care), and different types of data input (e.g., text vs video) and AI technologies (e.g., supervised vs unsupervised ML) 23 . For instance, models like Schaye et al’s ML model for automated assessment of resident clinical reasoning documentation are examples of supervised ML that use text-based labeled datasets 24 . Such models help overcome traditional barriers in medical education assessment by providing a sufficient number of assessment inputs and consistency in standards of assessment 25 .…”
Section: Use of AI in Precision Education
confidence: 99%