Background: Objective force- and motion-based assessment is currently lacking in laparoscopic skills curricula. This study aimed to evaluate the added value of parameter-based assessment and feedback during training. Methods: Laparoscopy-naïve surgical residents who took part in a 3-week skills training curriculum were included. A box trainer equipped with the ForceSense system was used to assess tissue manipulation (MaxForce) and instrument handling skills (Path length and Time). Learning curves were established using linear regression tests. Pre- and post-course comparisons indicated overall progression and were compared to predefined proficiency levels. A post-course survey was carried out to assess face validity. Results: In total, 4,268 trials, executed by 24 residents, were successfully assessed. Median MaxForce improved from 2.7 Newton (interquartile range 1.9–3.8) to 1.8 Newton (interquartile range 1.2–2.4) between pre- and post-course assessment (P = .009). Instrument Path length improved from 7,102.2 mm (interquartile range 5,255.2–9,025.9) to 3,545.3 mm (interquartile range 2,842.9–4,563.2) (P = .001). Time to execute the task improved from 159.8 seconds (interquartile range 119.8–219.0) to 60.7 seconds (interquartile range 46.0–79.5) (P = .001). The learning curves revealed during which training phase the proficiency benchmarks were reached by each trainee. In the survey outcomes, trainees indicated that this curriculum should be part of a surgical residency program (mean visual analog scale score of 9.2 ± 0.9 standard deviation). Conclusion: Force, motion, and time parameters can be objectively measured during basic laparoscopic skills curricula and indicate progression of skills over time. The ForceSense parameters enable curricula to be designed for specific proficiency-based training goals and offer the possibility of objective classification of levels of expertise.
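The abstract describes fitting learning curves with linear regression, checking them against predefined proficiency benchmarks, and comparing pre- versus post-course parameter values. The sketch below illustrates one way such an analysis could be set up; it is not the authors' code. The simulated values, the 2.0 N benchmark, and the choice of a Wilcoxon signed-rank test for the paired comparison are all assumptions, since the abstract reports only medians, interquartile ranges, and P values.

```python
# Illustrative sketch of the reported analysis: learning curve via linear
# regression, benchmark crossing, and a paired pre/post comparison.
# All numbers below are simulated stand-ins, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated per-trial MaxForce (Newton) for one trainee over the 3-week curriculum.
trials = np.arange(1, 181)
max_force = 3.0 * np.exp(-trials / 120) + rng.normal(0, 0.3, trials.size) + 1.5

# Learning curve: linear regression of the parameter value on trial number.
slope, intercept, r, p, se = stats.linregress(trials, max_force)
print(f"learning-curve slope: {slope:.4f} N/trial (p = {p:.3g})")

# Trial at which the fitted curve first reaches a predefined proficiency benchmark
# (2.0 N is a hypothetical target, not one stated in the abstract).
benchmark = 2.0
fitted = intercept + slope * trials
crossed = trials[fitted <= benchmark]
print("benchmark reached at trial:", crossed[0] if crossed.size else "not reached")

# Pre- vs post-course comparison across trainees (paired, non-parametric).
pre = rng.normal(2.7, 0.8, 24)            # stand-ins for 24 residents' pre-course MaxForce
post = pre - rng.normal(0.9, 0.3, 24)     # stand-ins for post-course MaxForce
w, p_paired = stats.wilcoxon(pre, post)
print(f"pre vs post MaxForce: W = {w:.1f}, p = {p_paired:.3g}")
```

The same template would apply to the Path length and Time parameters, with only the units and benchmark values changed.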
Background Laparoscopy has reduced tactile and visual feedback compared to open surgery. There is increasing evidence that visual and haptic information converge to form a more robust mental representation of an object. We investigated whether tactile exploration of an object prior to executing a laparoscopic action on it improves performance. Methods A prospective cohort study with 20 medical students randomized into two groups was conducted. A silicone ileocecal model, on which a laparoscopic action had to be performed, was used inside and outside a ForceSense box trainer. During the pre-test, students performed either a combined manual and visual exploration or only a visual exploration of the caecum model. To track performance during the trials, we used force, motion, and time parameters as measures of technical skills development. The final-trial data were used for statistical comparison between groups. Results None of the included time and motion parameters showed clear differences between groups. However, the force parameters Mean force non-zero (p = 0.004), Maximal force (p = 0.01), Maximal impulse (p = 0.02), Force volume (p = 0.02), and SD force (p = 0.01) showed significantly lower values in favour of the tactile exploration group for the final trials. Conclusions Adding haptic sensation to the existing visual information during training of laparoscopic tasks on lifelike models improves tissue manipulation skills.
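The second abstract compares final-trial force parameters between two independent groups of 10 students each. The sketch below shows one plausible form of that comparison; it is not the study's analysis code. The Mann-Whitney U test and the simulated values are assumptions, as the abstract does not name the test used.

```python
# Illustrative between-group comparison of one force parameter (Maximal force, N)
# for the tactile-exploration vs visual-only groups. Values are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
tactile = rng.normal(1.6, 0.4, 10)   # hypothetical final-trial values, tactile-exploration group
visual = rng.normal(2.2, 0.5, 10)    # hypothetical final-trial values, visual-only group

u, p = stats.mannwhitneyu(tactile, visual, alternative="two-sided")
print(f"Maximal force, tactile vs visual-only: U = {u:.1f}, p = {p:.3g}")
```

Each of the other force parameters reported in the abstract (Mean force non-zero, Maximal impulse, Force volume, SD force) could be compared the same way.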