In this work, the potential of data-driven optimization for the well-known berth allocation problem is studied. The aim of robust berth scheduling is to derive conflict-free vessel assignments at the quay of a terminal, taking into account uncertainty about the actual vessel arrival times, which may result from external influences such as cross winds and sea currents. To achieve robustness, four different machine learning methods, ranging from linear regression to an artificial neural network, are employed for vessel arrival time prediction in this work. The different machine learning methods are analysed and evaluated with respect to their forecast quality. As an extensive numerical study shows, the calculation and use of so-called dynamic time buffers (DTBs) in the berth scheduling model considerably enhance the robustness of the resulting schedules; the DTBs are derived from the different AIS-based forecasts, and their length depends on the estimated forecast reliability. Furthermore, the results show that even rather simple machine learning approaches are able to reach a high forecast accuracy. The optimization model not only leads to more robust solutions but also to shorter actual waiting times for the vessels, and hence to an enhanced service quality, as is shown by studying the resulting schedules for real vessel data. Moreover, it turns out that the accuracy of the resulting berthing schedules, measured as the deviation between planned and actually realisable schedules, exceeds the accuracy of all forecasts, which underlines the usefulness of the DTB approach.
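
The abstract does not specify how a DTB is computed; purely as an illustrative sketch, one plausible reading of "length depends on the estimated forecast reliability" is to size each vessel's buffer from the empirical spread of a prediction model's historical arrival-time errors, so that a less reliable forecast receives a longer buffer. The function name, the quantile rule, and the fabricated error sample below are assumptions for illustration, not the paper's method.

```python
import numpy as np

def dynamic_time_buffer(forecast_errors, coverage=0.9):
    """Hypothetical DTB rule (not from the paper): size the buffer as an
    upper quantile of past forecast errors (actual minus predicted arrival
    time, in hours). A wider error spread, i.e. a less reliable forecast,
    yields a longer buffer; the buffer is never negative."""
    return max(0.0, float(np.quantile(forecast_errors, coverage)))

# Fabricated residuals standing in for one ML model's historical errors.
rng = np.random.default_rng(0)
errors = rng.normal(loc=0.5, scale=2.0, size=200)  # illustration only

print(f"DTB at 90% coverage: {dynamic_time_buffer(errors):.2f} h")
```

Under this reading, each berth slot in the scheduling model would be extended by the model-specific buffer, which is one way such reliability-dependent padding could make a schedule robust against late arrivals.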