Aerial mapping provides high-resolution spatial data over large areas and is a cornerstone of urban planning, environmental monitoring, and disaster management. Accurate exterior camera orientations (EO) are essential for geo-referencing and orthorectification, yet the position and orientation data provided by the aerial platform are often not accurate enough to produce precisely georeferenced imagery. With our method, we improve the EO in a post-processing chain by referencing the extracted road network against the OpenStreetMap (OSM) road network. The dataset used in this paper was collected in the aftermath of the July 2021 flooding disaster along the Ahr river in Germany, using an aircraft operating at an altitude of 3000 ft. Covering an area of 230 km², it includes the diverse landscapes of the Eifel mountains, urban and rural locales, as well as the inundated and devastated regions. We apply a machine learning semantic segmentation model, DeepLab V3+, to the RGB images captured by the aircraft to identify road center lines. This rather sophisticated method is necessary because traditional road detectors were thrown off by the flooded areas, increasing false positive detections and limiting the EO optimization. We compute 3D reference points from available OSM road vector data combined with digital terrain model (DTM) elevation data. Using the camera intrinsics and initial EO values, we project the reference points into the images and compute their distances to the detected road feature points; minimizing this distance optimizes the EO parameters. Finally, we project the images onto an ortho-mosaic using the DTM, achieving an accuracy beyond that of the raw EO data measured by the aircraft. The resulting orthophoto map is almost free of visual artifacts, accurate in terms of geolocation, and can thus be utilized for disaster response applications.
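The core of the EO refinement described above can be sketched as a small nonlinear least-squares problem: project 3D reference points through a pinhole camera parameterized by the six EO values, and minimize the image-space distance to detected road points. The following is a minimal illustrative sketch, not the authors' implementation; the rotation convention, sign conventions, and the use of `scipy.optimize.least_squares` are assumptions, and real pipelines must additionally handle point correspondence, lens distortion, and robust loss functions.

```python
import numpy as np
from scipy.optimize import least_squares

def rotation_matrix(omega, phi, kappa):
    # Rotation from Euler angles (omega, phi, kappa); one common
    # photogrammetric convention (an assumption here).
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(points_3d, eo, focal):
    # Pinhole projection of world points into the image plane.
    # eo = (X0, Y0, Z0, omega, phi, kappa): camera position and attitude.
    X0, Y0, Z0, om, ph, ka = eo
    R = rotation_matrix(om, ph, ka)
    cam = (points_3d - np.array([X0, Y0, Z0])) @ R   # world -> camera frame
    return -focal * cam[:, :2] / cam[:, 2:3]         # collinearity equations

def refine_eo(eo_init, ref_points_3d, detected_2d, focal):
    # Minimize the image-space distance between projected OSM/DTM
    # reference points and the detected road feature points.
    def residuals(eo):
        return (project(ref_points_3d, eo, focal) - detected_2d).ravel()
    return least_squares(residuals, eo_init).x
```

On synthetic data, perturbing a known EO and calling `refine_eo` with matched point pairs recovers the true parameters, which is the same mechanism used here to pull the raw aircraft EO toward the OSM reference.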