Data segmentation, a.k.a. multiple change point analysis, has received considerable attention due to its importance in time series analysis and signal processing, with applications in a variety of fields including the natural and social sciences, medicine, engineering and finance. The goal of this survey article is twofold. In the first part, we review the existing literature on the canonical data segmentation problem, which aims at detecting and localising multiple change points in the mean of a univariate time series. We provide an overview of popular methodologies, discussing their computational complexity and theoretical properties. In particular, our theoretical discussion focuses on the separation rate, which determines which change points are detectable by a given procedure, and on the localisation rate, which quantifies the precision of the corresponding change point estimators; we further distinguish whether a homogeneous or a multiscale viewpoint has been adopted in their derivation, and highlight that the latter provides the most general setting for investigating the optimality of data segmentation algorithms. Arguably, the canonical segmentation problem has been the most popular framework in recent decades for proposing new data segmentation algorithms and studying their efficiency. In the second part of this survey, we motivate the importance of an in-depth understanding of the strengths and weaknesses of methodologies for the change point problem in the simpler, univariate setting, as a stepping stone towards the development of methodologies for more complex problems. We illustrate this with a range of examples showcasing the connections between complex distributional changes and changes in the mean. We also discuss extensions towards high-dimensional change point problems, where we demonstrate that the challenges arising from high dimensionality are orthogonal to those arising from the presence of multiple change points.
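
For concreteness, one standard formulation of the canonical change-in-mean problem, stated here purely for illustration (the precise assumptions on the noise and on the change point configuration vary across the methods surveyed), models the observations as
\[
  X_t = f_t + \varepsilon_t, \qquad t = 1, \dots, n,
\]
where the signal $f_t$ is piecewise constant with an unknown number $q$ of change points $\theta_1 < \dots < \theta_q$ at which its level shifts, and $\varepsilon_t$ denotes zero-mean noise; the task is to estimate both $q$ and the locations $\theta_1, \dots, \theta_q$.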