Cluster separation is a task typically tackled by widely used clustering techniques, such as k-means or DBSCAN. However, these algorithms rely on non-perceptual metrics, and our experiments demonstrate that their output does not reflect human cluster perception. To bridge the gap between human cluster perception and machine-computed clusters, we propose HPSCAN, a learning strategy that operates directly on scattered data. To learn perceptual cluster separation on such data, we crowdsourced the labeling of bivariate (scatterplot) datasets to 384 human participants and train our HPSCAN model on these human-annotated data. Instead of rendering these data as scatterplot images, we use their x and y point coordinates as input to a modified PointNet++ architecture, enabling direct inference on point clouds. In this work, we provide details on how we collected our dataset, report statistics of the resulting annotations, and investigate the perceptual agreement on cluster separation for real-world data. We also report the training and evaluation protocol for HPSCAN and introduce a novel metric that measures how accurately a clustering technique matches the annotations of a group of human annotators. Furthermore, we explore predicting point-wise human agreement to detect ambiguities. Finally, we compare our approach to 10 established clustering techniques and demonstrate that HPSCAN generalizes to unseen and out-of-scope data.
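To make the contrast concrete, the following is a minimal sketch (not taken from the paper) of the two non-perceptual baselines named above, k-means and DBSCAN, applied to a bivariate point set via scikit-learn; the synthetic blob data and all parameter values are illustrative assumptions. The resulting cluster assignments are driven purely by distance-based parameters (k, eps, min_samples) rather than by how a human would visually group the points.

```python
# Illustrative sketch: non-perceptual clustering baselines on 2D point data.
# Synthetic data and parameter choices are assumptions for demonstration only.
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

rng = np.random.default_rng(0)
# Two synthetic 2D blobs standing in for a scatterplot's (x, y) coordinates.
points = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.1, size=(100, 2)),
    rng.normal(loc=(1.0, 1.0), scale=0.1, size=(100, 2)),
])

# k-means: requires the number of clusters up front.
kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)

# DBSCAN: density-based; label -1 marks points treated as noise.
dbscan_labels = DBSCAN(eps=0.15, min_samples=5).fit_predict(points)

print("k-means cluster sizes:", np.bincount(kmeans_labels))
print("DBSCAN labels found:  ", sorted(set(dbscan_labels)))
```

Neither method receives any perceptual signal; HPSCAN instead consumes the same (x, y) coordinates but is trained on human annotations, which is the gap the paper addresses.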