2018
DOI: 10.1503/cjs.015917

Effect of rater training on the reliability of technical skill assessments: a randomized controlled trial

Abstract: Background: Rater training improves the reliability of observational assessment tools but has not been well studied for technical skills. This study assessed whether rater training could improve the reliability of technical skill assessment. Methods: Academic and community surgeons in Royal College of Physicians and Surgeons of Canada surgical subspecialties were randomly allocated to either rater training (7-minute video incorporating frame-of-reference training elements) or no training. Participants then ass…

Cited by 13 publications (10 citation statements); References 34 publications.

“…Previous studies have demonstrated the importance of rater training and experience with assessment tools in improving interrater reliability. 25, 26, 27 Because future iterations of the Bootcamp employ the median sternotomy training model and procedure checklist, the interrater reliability can be improved with training of raters and improved iterations of the checklist. Additionally, analysis of checklist items with recurrent poor interrater reliability and discussion with raters to ascertain sources of discrepancy may help elucidate whether rater training is sufficient, or whether modifications to the checklist are needed.…”
Section: Discussion
confidence: 99%
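
To make the item-level screening described in the statement above concrete, the following is a minimal sketch, not taken from the cited study, of how recurrently low-agreement checklist items might be flagged: two hypothetical raters score binary checklist items, per-item Cohen's kappa is computed, and items below a conventional 0.6 threshold are marked for discussion with the raters. The data layout, disagreement rate, and threshold are all illustrative assumptions.

```python
# Illustrative sketch: flag checklist items with poor interrater agreement.
# The two-rater binary scoring, synthetic data, and 0.6 kappa threshold are
# assumptions for illustration, not details from the cited study.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n_trainees, n_items = 30, 10

# Synthetic pass/fail scores: rater B disagrees with rater A ~15% of the time.
rater_a = rng.integers(0, 2, size=(n_trainees, n_items))
flip = rng.random((n_trainees, n_items)) < 0.15
rater_b = np.where(flip, 1 - rater_a, rater_a)

for item in range(n_items):
    kappa = cohen_kappa_score(rater_a[:, item], rater_b[:, item])
    flag = "  <-- discuss with raters" if kappa < 0.6 else ""
    print(f"item {item + 1:2d}: kappa = {kappa:5.2f}{flag}")
```

With real scores, items that fall below the threshold across repeated cohorts would be the natural candidates for the rewording-or-retraining discussion the statement describes.
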
“…The VAS score is typically used to assess pain or anxiety among patients, but it can also be used for other purposes, e.g., by residents assessing their own management of a particular kind of anesthesia [21], or for assessing the overall quality of patient sign-out from the emergency department [22]. For trainees' surgical skills, the VAS score has been used to evaluate suturing and knot-tying [23], showing both the VAS score and the OSATS global rating scale to be 'good' for educational purposes, with an interrater reliability (IRR) of 0.71 in a group where assessors were trained in the use of the scales. The IRR was slightly lower, though VAS scores correlated well with the combined OSATS score, and the scale was easy to use.…”
Section: Discussion
confidence: 99%
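
For continuous ratings such as the VAS scores discussed above, an interrater-reliability coefficient can be computed directly from a subjects-by-raters score matrix. The sketch below implements the two-way random-effects, absolute-agreement ICC(2,1) of Shrout and Fleiss on synthetic data; the choice of ICC variant and every number here are illustrative assumptions, not a reconstruction of how the cited studies obtained their IRR of 0.71.

```python
# Illustrative sketch: ICC(2,1) for continuous scores such as a 0-10 VAS.
# Synthetic data; not the cited studies' ratings or their exact method.
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement ICC(2,1).

    scores has shape (n_subjects, k_raters).
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-rater means
    # Mean squares from the two-way ANOVA decomposition.
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    resid = scores - row_means[:, None] - col_means[None, :] + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

rng = np.random.default_rng(1)
true_skill = rng.uniform(2, 9, size=20)                   # latent performance
ratings = true_skill[:, None] + rng.normal(0, 0.8, size=(20, 2))  # two raters
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")               # high with this noise
```

A dedicated routine (e.g., pingouin.intraclass_corr) reports the same family of coefficients with confidence intervals, which matters when judging whether an observed IRR such as 0.71 is adequate for the intended use.
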
“…Data collection was performed in a controlled and standardised environment using the same investigator who also acted as the nurse in all procedures. Furthermore, meticulous rater training was performed to improve reliability (21,22).…”
Section: Validity of the Assessment Tool
confidence: 99%