Background: Lack of anatomy recognition represents a clinically relevant risk factor in abdominal surgery. While machine learning methods have the potential to aid in the recognition of visible patterns and structures, the limited availability and diversity of (annotated) laparoscopic image data restrict the clinical potential of such applications in practice. This study explores the potential of machine learning algorithms to identify and delineate abdominal organs and anatomical structures using a robust and comprehensive dataset, and compares algorithm performance to that of humans.

Methods: Based on the Dresden Surgical Anatomy Dataset, which provides 13,195 laparoscopic images with pixel-wise segmentations of eleven anatomical structures, two machine learning approaches were developed: individual segmentation models for each structure, and a combined model with a common encoder and structure-specific decoders. Performance was assessed using F1 score, Intersection-over-Union (IoU), precision, recall, and specificity. Using pancreas segmentation on a sample dataset of 35 images as an example, algorithm performance was compared to that of a cohort of 28 physicians, medical students, and medical laypersons.

Results: Mean IoU for segmentation of intraabdominal structures ranged from 0.28 to 0.83 for the structure-specific models and from 0.32 to 0.81 for the combined semantic segmentation model. Average inference time was 20 ms for the structure-specific model (one anatomical structure) and 54 ms for the combined model (eleven anatomical structures). In pancreas segmentation, the structure-specific model performed at least as well as 27 of the 28 human participants.

Conclusions: Machine learning methods have the potential to provide relevant assistance in anatomy recognition in minimally invasive surgery in near-real-time. Future research should investigate the educational value and subsequent clinical impact of such assistance systems.
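As a rough illustration of the combined-model design mentioned in the Methods (a shared encoder feeding several structure-specific decoders), the following PyTorch sketch shows the general pattern. The backbone, decoder layout, channel sizes, and class `SharedEncoderMultiDecoder` are illustrative assumptions and do not reflect the authors' actual implementation.

```python
# Minimal sketch of a shared-encoder, multi-decoder segmentation network.
# Backbone, channel sizes, and decoder design are illustrative assumptions;
# the paper's actual architecture may differ.
import torch
import torch.nn as nn
import torchvision

NUM_STRUCTURES = 11  # eleven anatomical structures in the dataset


class SharedEncoderMultiDecoder(nn.Module):
    def __init__(self, num_structures: int = NUM_STRUCTURES):
        super().__init__()
        # Common encoder: a ResNet-18 feature extractor (assumption).
        backbone = torchvision.models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # -> (B, 512, H/32, W/32)
        # One lightweight decoder head per structure, each predicting a binary mask.
        self.decoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(512, 64, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 1, kernel_size=1),
                nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
            )
            for _ in range(num_structures)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.encoder(x)                      # shared features
        masks = [decoder(features) for decoder in self.decoders]
        return torch.cat(masks, dim=1)                  # (B, num_structures, H, W) logits


if __name__ == "__main__":
    model = SharedEncoderMultiDecoder()
    logits = model(torch.randn(1, 3, 256, 256))
    print(logits.shape)  # torch.Size([1, 11, 256, 256])
```

The appeal of this pattern is that the encoder is run once per frame while only the small decoder heads are structure-specific, which is consistent with the combined model segmenting eleven structures in a single pass.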
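The evaluation metrics listed in the Methods follow standard definitions and can be computed from binary prediction and ground-truth masks as in the sketch below. This is only an illustration of the metric definitions, not the authors' evaluation code; the helper `segmentation_metrics` is hypothetical.

```python
# Per-image segmentation metrics from binary prediction and ground-truth masks.
# Standard definitions; not the authors' evaluation code.
import numpy as np


def segmentation_metrics(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> dict:
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)            # sensitivity
    specificity = tn / (tn + fp + eps)
    iou = tp / (tp + fp + fn + eps)          # Intersection-over-Union (Jaccard index)
    f1 = 2 * tp / (2 * tp + fp + fn + eps)   # F1 score (equivalent to Dice for binary masks)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "iou": iou, "f1": f1}
```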