Objectives
To evaluate robotic dry laboratory (dry lab) exercises in terms of their face, content, construct and concurrent validities.
To evaluate the applicability of the Global Evaluative Assessment of Robotic Skills (GEARS) tool to assess dry lab performance.
Materials and Methods
Participants were prospectively categorized into two groups: robotic novice (no cases as primary surgeon) and robotic expert (≥30 cases).
Participants completed three virtual reality (VR) exercises using the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA), as well as corresponding dry lab versions of each exercise (Mimic Technologies, Seattle, WA, USA) on the da Vinci Surgical System.
Simulator performance was assessed using the simulator's built-in metrics. Dry lab performance was evaluated by blinded expert video review using the six-metric GEARS tool.
Participants completed a post‐study questionnaire (to evaluate face and content validity).
A Wilcoxon non-parametric test was used to compare performance between groups (construct validity), and Spearman's correlation coefficient was used to assess the relationship between simulator and dry lab performance (concurrent validity).
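As a minimal illustrative sketch of how such an analysis could be run (not the authors' actual analysis code), the example below applies a Wilcoxon rank-sum test and Spearman's correlation using SciPy; all scores are hypothetical placeholders, not study data.

```python
# Illustrative sketch only: hypothetical scores, not data from this study.
import numpy as np
from scipy import stats

# Hypothetical GEARS total scores (out of 30) for each group.
novice_scores = np.array([12, 14, 11, 15, 13, 16, 12, 14])
expert_scores = np.array([26, 27, 24, 28, 25, 27, 26, 29])

# Construct validity: Wilcoxon rank-sum test comparing novices and experts.
stat, p_construct = stats.ranksums(novice_scores, expert_scores)
print(f"Wilcoxon rank-sum: statistic={stat:.2f}, P={p_construct:.4f}")

# Concurrent validity: Spearman's correlation between each participant's
# simulator score and dry lab GEARS score (hypothetical paired values).
simulator_scores = np.array([55, 60, 48, 70, 52, 75, 58, 62,
                             88, 92, 85, 95, 90, 91, 87, 96])
dry_lab_gears = np.concatenate([novice_scores, expert_scores])
rho, p_concurrent = stats.spearmanr(simulator_scores, dry_lab_gears)
print(f"Spearman: r={rho:.2f}, P={p_concurrent:.4f}")
```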
Results
Novices had performed no robotic cases as primary surgeon, whereas experts had performed a mean (range) of 200 (30–2000) cases.
Expert surgeons found the dry lab exercises both ‘realistic’ (median [range] score 8 [4–10] out of 10) and ‘very useful’ for training of residents (median [range] score 9 [5–10] out of 10).
Overall, expert surgeons completed all dry lab tasks more efficiently (P < 0.001) and effectively (GEARS total score P < 0.001) than novices. In addition, experts outperformed novices in each individual GEARS metric (P < 0.001).
Finally, dry lab and simulator performance showed a moderate overall correlation (r = 0.54, P < 0.001), and most simulator metrics correlated moderately to strongly with the corresponding GEARS metrics (r = 0.54, P < 0.001).
Conclusions
The robotic dry lab exercises in the present study demonstrate face, content and construct validity, as well as concurrent validity with the corresponding VR tasks.
Until now, the assessment of dry lab exercises has been limited to basic metrics (i.e. time to completion and error avoidance). For the first time, we have shown it is feasible to apply a global assessment tool (GEARS) to dry lab training.