Objective. To evaluate the interrater reliability of a universal evaluator rubric used to assess student pharmacist communication skills during patient education sessions. Methods. Six schools/colleges of pharmacy each submitted 10 student videos of a simulated community pharmacy patient education session and recruited two raters from each of five rater groups (faculty, standardized patients, postgraduate year one residents, student pharmacists, and pharmacy preceptors). Raters used a rubric containing 20 items and a global assessment to evaluate student communication in 12 videos. Agreement was computed for individual items and for the overall rubric score within each rater group, and for each item across all rater groups. Average overall rubric agreement scores were compared among rater groups. Agreement coefficient scores were categorized as no to minimal, weak, moderate, strong, or almost perfect agreement. Results. Fifty-five raters representing five rater groups and six schools/colleges of pharmacy evaluated student communication. Item agreement analysis across all raters revealed five items with no to minimal or weak agreement, 10 items with moderate agreement, one item with strong agreement, and five items with almost perfect agreement. Overall average agreement across all rater groups was 0.73 (95% CI 0.66-0.81). The preceptor rater group had the lowest agreement score, 0.68 (95% CI 0.58-0.78), which differed significantly from the overall average.
Conclusion. While strong or almost perfect agreement was not observed for every rubric item, the overall average interrater reliability results support use of this rubric by a variety of raters to assess student pharmacist communication skills during patient education sessions.
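The Methods describe computing agreement coefficients within and across rater groups and categorizing them into the bands named above. As a minimal sketch only, the example below computes Gwet's AC1 for two raters scoring a single dichotomous rubric item and maps the result onto those bands; the abstract does not name the coefficient or the cut points actually used, so the choice of AC1, the thresholds, and the sample ratings are all assumptions made for illustration.

```python
# Illustrative sketch only: the abstract does not specify the agreement
# coefficient; Gwet's AC1 for two raters is shown as one plausible choice,
# with assumed (McHugh-style) interpretation bands.
from collections import Counter


def gwet_ac1(ratings_a, ratings_b):
    """Gwet's AC1 for two raters assigning categorical ratings to the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    q = len(categories)

    # Observed agreement: proportion of items on which the two raters agree.
    p_a = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Chance agreement: based on the average proportion of items
    # assigned to each category across both raters.
    counts = Counter(ratings_a) + Counter(ratings_b)
    pi = {k: counts[k] / (2 * n) for k in categories}
    p_e = sum(p * (1 - p) for p in pi.values()) / (q - 1)

    return (p_a - p_e) / (1 - p_e)


def interpret(coefficient):
    """Map a coefficient to the bands named in the abstract (thresholds assumed)."""
    if coefficient < 0.40:
        return "no to minimal"
    if coefficient < 0.60:
        return "weak"
    if coefficient < 0.80:
        return "moderate"
    if coefficient <= 0.90:
        return "strong"
    return "almost perfect"


# Hypothetical ratings: two raters scoring 12 videos on one item (1 = met, 0 = not met).
rater_1 = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1]
rater_2 = [1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1]
ac1 = gwet_ac1(rater_1, rater_2)
print(f"AC1 = {ac1:.2f} ({interpret(ac1)})")
```

With the sample ratings shown, the sketch prints AC1 = 0.73 ("moderate"); this is purely a worked illustration of the banding, not a reproduction of the study's analysis.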