The accurate recognition of surgical instruments is essential for the advancement of intraoperative artificial intelligence (AI) systems. In this study, we assessed the efficacy of the YOLOv8 model in identifying robotic and laparoscopic instruments during robot-assisted abdominal surgeries. Specifically, we evaluated its ability to detect, classify, and segment seven different types of surgical instruments. A diverse dataset was compiled from four public and private sources, encompassing over 7,400 frames and 17,175 annotations representing a variety of surgical contexts and instruments. YOLOv8 was trained and tested on this dataset, achieving a mean average precision of 0.77 for binary detection and 0.72 for multi-instrument classification. Optimal performance was observed once the training set for a given instrument reached 1,300 instances. The model also demonstrated excellent segmentation accuracy, achieving a mean Dice score of 0.91 and a mean intersection over union of 0.86, with the Monopolar Curved Scissors yielding the highest accuracy. Notably, YOLOv8 recognized robotic instruments more reliably than laparoscopic tools, a difference likely attributable to the greater representation of robotic instruments in the training set. Furthermore, the model’s rapid inference speed of 1.12 milliseconds per frame highlights its suitability for real-time clinical applications. These findings confirm YOLOv8’s potential for precise and efficient recognition of surgical instruments when trained on a comprehensive multi-source dataset.
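For orientation, the sketch below shows how detection and segmentation inference of the kind reported here could be run with the Ultralytics Python API, which is the reference implementation of YOLOv8. It is a minimal illustration only: the checkpoint name and frame path are hypothetical placeholders, and the study's own trained weights and preprocessing are not assumed.

```python
from ultralytics import YOLO  # pip install ultralytics

# Hypothetical fine-tuned YOLOv8 segmentation checkpoint (placeholder name);
# the study's released weights are not assumed to be available here.
model = YOLO("yolov8m-seg_surgical.pt")

# Run detection + instance segmentation on one surgical video frame
# (placeholder file name); conf sets the confidence threshold.
results = model.predict("frame_0001.png", conf=0.25)

for r in results:
    # r.boxes holds class indices, confidences, and bounding boxes;
    # r.masks holds per-instrument segmentation masks for *-seg models.
    for cls_idx, conf in zip(r.boxes.cls.tolist(), r.boxes.conf.tolist()):
        print(model.names[int(cls_idx)], round(conf, 2))
```

In this setup a single `predict` call returns both the bounding boxes used for the detection and classification metrics and the masks from which Dice and intersection-over-union scores would be computed against ground-truth annotations.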