2018
DOI: 10.1016/j.surg.2017.10.029
Developing a robust suturing assessment: validity evidence for the intracorporeal suturing assessment tool

Cited by 9 publications (5 citation statements)
References 16 publications
“…Further, the ESs only had the first 5 min of the videos to rate, which may have made it harder for them to give an accurate rating even though they had received rater training beforehand. This is not in accordance with a previous study, where ESs showed agreement for videos edited to the first part of the procedure [14]. The use of videos from a simulated environment instead of real-life surgeries made it possible to completely standardize the procedures and allowed all participants to perform in an independent and unsupervised (“real”) fashion.…”
Section: Discussion (contrasting; confidence: 96%)
“…We used video recordings of three different modules: bladder neck dissection (BND), neurovascular bundle dissection (NVBD), and urethrovesical anastomosis (UVA); we edited the videos to include only the first 5 min to standardize them. We anticipated that the total time for video assessment for the CWs would be too long [14], as full-length videos were up to 43 min long. All CWs and ESs were blinded to the identity and skill level of the surgeon in the recorded video.…”
Section: Methods (mentioning; confidence: 99%)
“…We showed identical inter-rater reliability for ES when assessing full-length videos and 5-minute videos. This is in line with Anton et al. [7], who showed ES could assess 30-second videos of laparoscopic suturing just as well as full-length videos. When we look at the ability of the ES to discriminate between the surgical skill levels of the surgeons, we find that they can distinguish well between novice and experienced surgeons, and between intermediate and experienced surgeons, in both full-length videos and 5-minute videos (Fig.…”
Section: Discussion (supporting; confidence: 89%)
“…[2-6] In order to reduce the time and effort needed for video assessments, shortened or segmented video clips could potentially be used instead of full-length videos [7]. Crowdsourced assessment is a relatively new trend that uses anonymous individuals to complete small, precise tasks as an alternative method of evaluation [8,9]. It has previously been used to solve a range of different problems, e.g., deciphering complex protein structure folding or helping blind users find their mobile phones.…”
(mentioning; confidence: 99%)