Outlying observations are undesirable but possible elements of geodetic measurement sets. In such a context, the primary and trivial solution is to repeat "suspected" observations. The question arises: what if the measurements cannot be performed again, or if one cannot flag outliers easily and efficiently? In such a case, one should process the data by applying methods that account for the possible occurrence of outlying observations. Historically, apart from some earlier attempts, the statistical approach to robust estimation originates in the 1960s and stems from the pioneering papers of Huber, Tukey, Hampel, Hodges, and Lehmann. The statistical procedures known as data snooping (data dredging) were developed at about the same time. It did not take long before robust procedures were applied to the processing of geodetic observations and the adjustment of observation systems. The first works of Baarda and Pope encouraged other scientists and surveyors to develop robust procedures adapted to geodetic and surveying problems, which led to their rapid development during the last two decades of the 20th century. The question for the 21st century is whether robustness remains an important issue for modern measurement technologies and numerical data processing. One should realize that modern geodetic techniques do not decrease the probability of outlier occurrence. Considering measurement systems that yield big data, it is almost certain that outliers will occur somewhere. The paper reviews different approaches to robust processing of geodetic observations, from data snooping methods, random sampling, M-estimation, R-estimation, and Msplit estimation to robust estimation of the variance coefficient. Such variety reflects the different natures, origins, and properties of outliers, as well as the evident fact that no single robust approach is the best, most efficient, and universal. The methods presented form the basis for future solutions based on, e.g., machine learning.
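As a brief illustration of the general idea behind M-estimation (a minimal sketch in standard notation, assuming a linearized functional model that is not introduced in this abstract), the least-squares criterion is replaced by a more general objective:

\[
\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \sum_{i=1}^{n} \rho(v_i), \qquad \mathbf{v} = \mathbf{A}\mathbf{x} - \mathbf{l},
\]

where \(\rho\) is a loss function that grows more slowly than the square for large residuals (e.g., the Huber function), thereby limiting the influence of outlying observations; the choice \(\rho(v) = v^2\) recovers ordinary least squares.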