2014
DOI: 10.1111/bju.12559

Face, content, construct and concurrent validity of dry laboratory exercises for robotic training using a global assessment tool

Abstract: Objectives To evaluate robotic dry laboratory (dry lab) exercises in terms of their face, content, construct and concurrent validities. To evaluate the applicability of the Global Evaluative Assessment of Robotic Skills (GEARS) tool to assess dry lab performance. Materials and Methods Participants were prospectively categorized into two groups: robotic novice (no cases as primary surgeon) and robotic expert (≥30 cases). Participants completed three virtual reality (VR) exercises using the da Vinci Skills Simul…

Cited by 77 publications (48 citation statements)
References 14 publications
“…In validation studies, participants were recruited from the following specialties: urology (n = 15 studies) [10–13,15,16,21,25–27,29,30,32,36,37]; gynecology (n = 1) [19]; urology and general surgery (n = 1) [24]; gynecology and general surgery (n = 1) [38]; urology, gynecology, and general surgery (n = 3) [20,22,31]; urology, gynecology, and cardiothoracic surgery (n = 1) [14]; and urology, otorhinolaryngology, cardiology, thoracic surgery, and gynecology (n = 1) [23]; in six studies, the specialty was not indicated [17,18,28,33–35]. In skills transfer studies, participants were from gynecology (n = 3) [42–44]; urology (n = 2) [13,27]; general surgery (n = 1) [40]; and urology and gynecology (n = 1) [41]; in two studies the specialty was not reported [39,45].…”
Section: Surgical Specialties of Participants
confidence: 99%
“…Among the benchmarks used to assess validity are the concepts of face, content, construct, and criterion validity [36]. Face validity, or realism, describes the extent to which the test simulates the conditions of the real world.…”
Section: Simulation Design
confidence: 99%
“…Face validity, or realism, describes the extent to which the test simulates the conditions of the real world. Content validity represents the extent to which the measurement reflects the attributes it purports to measure [36]. The minimum requirement for simulator approval is construct validity, which objectively describes whether the test measures the construct it claims to measure.…”
Section: Simulation Design
confidence: 99%
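
The construct-validity criterion quoted above is typically tested by comparing assessment scores across known experience levels. Below is a minimal sketch of such a check in Python; the sample scores, group sizes, and the choice of a Mann-Whitney U test are illustrative assumptions, not the paper's actual analysis.

```python
# Construct-validity sketch: do total GEARS scores separate robotic
# novices from experts on the same dry lab exercise?
# All scores below are hypothetical illustrative data, not study results.
from scipy.stats import mannwhitneyu

novice_scores = [12, 14, 15, 13, 16, 14, 15]  # hypothetical total GEARS scores
expert_scores = [24, 26, 23, 27, 25, 26, 24]

# A non-parametric two-sample test suits small samples of ordinal scores.
stat, p_value = mannwhitneyu(novice_scores, expert_scores, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")

# A significant difference in the expected direction supports construct
# validity: the exercise, scored with GEARS, distinguishes skill levels.
```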
“…Despite this rapid surge in use, the development and validation of training methods have failed to keep pace. Several training platforms have undergone validation testing, including inanimate tasks [1–5], virtual reality exercises [6–9] and ex vivo models [5,9,10]. Validation work in these studies includes demonstration of face (realism of the training tool), content (usefulness as a training tool), construct (ability to distinguish between different skill levels) and cross-method (correlation across training methods) validity.…”
Section: Introduction
confidence: 99%
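
Cross-method (concurrent) validity, as described in the quote above, amounts to checking that performance measured on one training platform tracks performance on another. The following is a minimal sketch under assumed, hypothetical paired data; the variable names and the choice of Spearman rank correlation are illustrative, not taken from the paper.

```python
# Concurrent (cross-method) validity sketch: do dry lab GEARS scores
# correlate with VR simulator scores for the same participants?
# Paired values below are hypothetical, for illustration only.
from scipy.stats import spearmanr

dry_lab_gears = [14, 18, 22, 25, 16, 27, 20]  # per-participant GEARS totals (hypothetical)
vr_sim_scores = [55, 62, 78, 85, 60, 90, 70]  # matched VR metric scores (hypothetical)

# Rank correlation avoids assuming a linear relation between the two scales.
rho, p_value = spearmanr(dry_lab_gears, vr_sim_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")

# A strong positive correlation supports cross-method validity: skill
# measured on one platform tracks skill measured on the other.
```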