When large vessels such as container ships approach their destination port, they are required by law to have a maritime pilot on board who is responsible for safely navigating the vessel to its desired location. The maritime pilot has extensive knowledge of the local area and of how currents and tides affect the vessel’s navigation. In this work, we present a novel end-to-end solution for estimating the time-to-collision (TTC) between moving objects (i.e., vessels), using real-time image streams from aerial drones in dynamic maritime environments. Our method relies on deep features, learned from realistic simulation data, for reliable and robust object detection, segmentation, and tracking. Furthermore, it uses rotated bounding box representations, computed by exploiting pixel-level object segmentation, for enhanced TTC estimation accuracy. We present collision estimates in an intuitive manner, as collision arrows that gradually change color to red to indicate an imminent collision. A set of experiments in a realistic shipyard simulation environment demonstrates that our method can accurately, robustly, and quickly predict the TTC between dynamic objects seen from a top view, with a mean error of 0.358 s and a standard deviation of 0.114 s in a worst-case scenario.
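The abstract does not specify how the TTC is computed from the tracked objects; as a point of orientation only, the core quantity can be illustrated with a minimal constant-velocity sketch. The function name and the closest-point-of-approach formulation below are our own illustrative assumptions, not the authors' method, which operates on rotated bounding boxes rather than point centres.

```python
def estimate_ttc(p1, v1, p2, v2):
    """Illustrative time-to-collision estimate between two moving objects.

    p1, p2: (x, y) centre positions; v1, v2: (vx, vy) velocities, in
    consistent units. Assumes constant velocity and point-like objects.
    Returns the time of closest approach, or None if the objects are not
    on a converging course.
    """
    # Relative position and velocity of object 2 with respect to object 1.
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]
    speed_sq = vx * vx + vy * vy
    if speed_sq == 0.0:
        return None  # no relative motion, hence no collision course
    # Time minimising the squared distance |r + v * t|^2.
    t = -(rx * vx + ry * vy) / speed_sq
    return t if t > 0 else None  # a non-positive t means they are separating
```

A threshold on such an estimate could then drive a visual cue like the colour of the collision arrows described above, shifting toward red as the estimate shrinks.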