2014
DOI: 10.1186/s12909-014-0279-9

Reliability of the Interprofessional Collaborator Assessment Rubric (ICAR) in Multi Source Feedback (MSF) with post-graduate medical residents

Abstract: Background: Increased attention on collaboration and teamwork competency development in medical education has raised the need for valid and reliable approaches to the assessment of collaboration competencies in post-graduate medical education. The purpose of this study was to evaluate the reliability of a modified Interprofessional Collaborator Assessment Rubric (ICAR) in a multi-source feedback (MSF) process for assessing post-graduate medical residents' collaborator competencies. Methods: Post-graduate medical re…


Cited by 38 publications (35 citation statements) · References 31 publications
“…14 We therefore created a behaviorally defined assessment tool, based on Interprofessional Education Collaborative domains and competencies, and tailored to the routine physician-nurse interactions that occur frequently as part of primary care practice for common chronic (e.g., diabetes, hypertension) and urgent conditions. Additional tools focused on assessing performance have since been developed (see, for example, Interprofessional Collaborator Assessment Rubric 15 and the Performance Assessment for Communication and Teamwork 16), and while these share many of the core skills and domains we identified from the literature, they also focus on the team and not individuals, including domains such as situational monitoring, team goals and use of protocols/checklists not relevant to PC scenarios, and tend to use judgment-based scoring options (e.g., below expectations or excellent). Our OSCE assessments are designed to help Standardized Patients rate consistently and accurately by using case-specific behavioral descriptors (observable actions) as the basis for selecting scoring options.…”
mentioning
confidence: 99%
“…Ratings may also have been influenced by how well reviewers knew those being assessed, sex differences, and differences in radiologists' and nonradiologists' personal or professional expectations of radiologists. Other studies conducted with multisource feedback in medical education, including within radiology, have shown significant differences in competency evaluations between assessor groups [7,21,22]. Nonetheless, our use of disparate representative stakeholders as reviewers and raters was consonant with beliefs that interdisciplinary collaboration is optimal for assessing competency in communication and interpersonal skills and that those with expertise in humanistic and psychosocial aspects of health care bring uniquely valuable insights into the evaluation process [11,15,22,23].…”
Section: Discussion
mentioning
confidence: 71%
“…Other studies conducted with multisource feedback in medical education, including within radiology, have shown significant differences in competency evaluations between assessor groups [7,21,22]. Nonetheless, our use of disparate representative stakeholders as reviewers and raters was consonant with beliefs that interdisciplinary collaboration is optimal for assessing competency in communication and interpersonal skills and that those with expertise in humanistic and psychosocial aspects of health care bring uniquely valuable insights into the evaluation process [11,15,22,23]. Such an interdisciplinary approach may be particularly important for assessing communication and relational skills in radiology, in which no established standards for excellence exist, few validated communication skills programs have been developed, and faculty development remains largely ad hoc.…”
Section: Discussion
mentioning
confidence: 99%
“…Multi-source evaluation (a supervisor-, subordinate-, and self-appraisal) was used in this study. Available literature found potential value in the use of a 360-degree or multi-source evaluation particularly among nurses [26] and post-graduate medical residents [27]. Variance in the ratings from a multi-source appraisal cannot be attributed to rater groups since the same underlying constructs are being measured in the study [28].…”
Section: Research Instrument
mentioning
confidence: 99%