Objective
This study explored the use of electroencephalogram (EEG) and eye-gaze features, experience-related features, and machine learning to evaluate performance and learning rates in Fundamentals of Laparoscopic Surgery (FLS) and robotic-assisted surgery (RAS) tasks.
Methods
EEG and eye-tracking data were collected from 25 participants performing three FLS tasks and 22 participants performing two RAS tasks. Generalized linear mixed models with L1-penalized estimation were developed to objectively evaluate performance from EEG and eye-gaze features, and linear models were developed to objectively evaluate learning rate from these features together with first-attempt performance scores. Experience metrics were added to evaluate their role in learning robotic surgery. Differences in performance across experience levels were tested using analysis of variance (ANOVA).
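As an illustration of the modeling approach described above, the sketch below fits an L1-penalized mixed model that predicts a task performance score from EEG and eye-gaze features, with a random intercept per participant. This is not the authors' code: it uses a Gaussian linear mixed model in statsmodels as a stand-in for the generalized linear mixed models in the study, and the file name, column names, and penalty weight are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per participant x task attempt, with
# a performance score and candidate EEG / eye-gaze features (names illustrative).
df = pd.read_csv("features.csv")

# Linear mixed model with a random intercept per participant; fit_regularized
# applies an L1 (lasso) penalty to the fixed-effect coefficients, shrinking
# uninformative features toward zero.
model = smf.mixedlm(
    "score ~ eeg_alpha_power + eeg_theta_power + fixation_duration + saccade_rate",
    data=df,
    groups=df["participant_id"],
)
result = model.fit_regularized(method="l1", alpha=0.5)  # alpha chosen for illustration
print(result.fe_params)  # penalized fixed-effect estimates (uninformative ones shrink to zero)

In this setup, the random intercept absorbs participant-to-participant baseline differences while the L1 penalty performs feature selection among the EEG and eye-gaze predictors.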
Results
EEG, eye-gaze, and experience-related features were important for evaluating performance in FLS and RAS tasks, and the resulting models performed reasonably well. Residents outperformed faculty in FLS peg transfer (p = 0.04), while both faculty and residents outperformed pre-medical students in the FLS pattern cut (p = 0.01 and p < 0.001, respectively). Fellows outperformed pre-medical students in FLS suturing (p = 0.01). In RAS tasks, both faculty and fellows outperformed pre-medical students (pattern cut: p = 0.001 for faculty and p = 0.003 for fellows; tissue dissection: p < 0.001 for both groups), and residents also showed superior skill in RAS tissue dissection (p = 0.03).
Conclusion
These findings could be used to develop training interventions that improve surgical skills, and they have implications for understanding motor learning and for designing interventions that enhance learning outcomes.