2015
DOI: 10.1007/s00464-015-4070-8
External validation of Global Evaluative Assessment of Robotic Skills (GEARS)

Abstract: In an independent cohort, GEARS was able to differentiate between different robotic skill levels, demonstrating excellent construct validity. As a standardized assessment tool, GEARS maintained consistency and reliability for an in vivo robotic surgical task and may be applied for skills evaluation in a broad range of robotic procedures.

Cited by 103 publications (67 citation statements) · References 13 publications
“…Curricula are being developed for use with the robotic surgical simulators that aim to train utilizers to proficiency standards [40,41]. In parallel, global evaluative assessment tools to specifically rate robotic surgical skill are being developed [42]. As these systems become more developed and utilized, perhaps the robotic revolution will follow the widespread adoption of laparoscopy and training will take place during residency and fellowship training, obviating the need for mid- or late-career skill acquisition.…”
Section: Discussion
confidence: 99%
“…To highlight the range of participant experience levels, demographic data were stratified into expert and trainee groups and the data compared across groups using the Mann-Whitney U-test for categorical data and the chi-squared/Fisher's exact test for nominal data. To remain consistent with previous studies, experts were defined as participants with >30 robotic cases as primary surgeon [10,12,13]. Experience level stratification was not statistically necessary for the primary outcomes of the study, but was carried out to allow contemporary comparison.…”
Section: Discussion
confidence: 99%
“…Despite this rapid surge in use, the development and validation of training methods has failed to keep pace. Several training platforms have undergone validation testing, including inanimate tasks [1][2][3][4][5], virtual reality exercises [6][7][8][9] and ex vivo models [5,9,10]. Validation work in these studies includes demonstration of face (realism of training tool), content (usefulness as training tool), construct (ability to distinguish between different skill levels) and cross-method (correlation across training methods) validity.…”
Section: Introduction
confidence: 99%
“…These cutoffs were selected based on external GEARS validation data demonstrating performance differences at this level. 11 Face validity was evaluated based on responses of all participants to questions 1 and 2 of the exit questionnaire. Content validity was derived from only experts' responses to questions 3-5.…”
Section: Discussion
confidence: 99%