In recent decades, road extraction from very high-resolution (VHR) remote sensing images has become popular and has attracted extensive research efforts. However, the very high spatial resolution, complex urban structures, and contextual background effects of road images complicate the process of road extraction. For example, shadows, vehicles, or other objects may occlude a road located in a developed urban area. To address the problem of occlusion, this study proposes a semiautomatic approach for road extraction from VHR remote sensing images. First, guided image filtering is employed to reduce the negative effects of nonroad pixels while preserving edge smoothness. Then, an edge-constraint-based weighted fusion model is adopted to trace and refine the road centerline. An edge-constraint fast marching method, which sequentially links discrete seed points, is presented to maintain road-point connectivity. Six experiments with eight VHR remote sensing images (spatial resolution of 0.3 m/pixel to 2 m/pixel) are conducted to evaluate the efficiency and robustness of the proposed approach. Compared with state-of-the-art methods, the proposed approach delivers higher extraction quality while requiring less computation time and fewer seed points.
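To make the preprocessing step concrete, the sketch below implements a standard guided image filter (in the style of He et al.) with NumPy/SciPy; the window radius and regularization value are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Edge-preserving smoothing of `src` steered by `guide` (2-D float arrays).

    Minimal sketch of the classic guided image filter; `radius` and `eps`
    are illustrative values, not those used in the paper.
    """
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_I = uniform_filter(guide * guide, size)
    corr_Ip = uniform_filter(guide * src, size)

    var_I = corr_I - mean_I * mean_I      # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p    # local covariance guide/source

    a = cov_Ip / (var_I + eps)            # per-window linear coefficients
    b = mean_p - a * mean_I

    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * guide + mean_b        # smoothed output with edges preserved
```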
Automatic extraction of roads from multi-source remote sensing data has always been a challenging task. Factors such as shadow occlusion and multi-source data alignment errors prevent current deep learning-based road extraction methods from acquiring road features with high complementarity, redundancy, and crossover. Unlike previous works that capture context through multi-scale feature fusion, we propose a dual attention dilated-LinkNet (DAD-LinkNet) that adaptively integrates local road features with their global dependencies by jointly using satellite imagery and floating vehicle trajectory data. First, a joint least-squares feature-matching-based correction model is used to correct the floating vehicle trajectories. Then, a convolutional network, DAD-LinkNet, built on a dual-attention mechanism is proposed: a dual-attention module embedded in the dilated convolutional layers extracts road features from the channel domain and the spatial domain of the target image in turn through a cascade connection, and a weighted hyperparameter loss function serves as the model's loss. Finally, road extraction is performed with the proposed DAD-LinkNet model. Experiments on three datasets show that the proposed DAD-LinkNet model outperforms state-of-the-art methods in terms of accuracy and connectivity.
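To illustrate the cascaded channel-then-spatial attention idea, the PyTorch sketch below places a CBAM-style dual-attention block after a dilated convolution; the layer sizes, reduction ratio, and residual connection are assumptions for illustration, not the exact DAD-LinkNet configuration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweights feature channels using global average and max statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * w

class SpatialAttention(nn.Module):
    """Reweights spatial positions using channel-pooled descriptors."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

class DualAttentionDilatedBlock(nn.Module):
    """Dilated conv followed by cascaded channel -> spatial attention (sketch)."""
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3,
                              padding=dilation, dilation=dilation)
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        out = torch.relu(self.conv(x))
        out = self.sa(self.ca(out))   # channel domain first, then spatial domain
        return out + x                # residual connection (an assumed choice)
```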
Seed-point-based road extraction methods are vital for extracting road networks from satellite images. Despite their effectiveness, they struggle with the complications of roads in very high-resolution (VHR) satellite images, such as road occlusion and material change. To tackle this issue, this paper proposes a colour space transformation combined with a geodesic method. First, the test image is converted from the Red-Green-Blue colour space to the Hue-Saturation-Value colour space to reduce the influence of material change. The geodesic method is subsequently applied to extract initial road segments that link road seed points provided by users. Finally, the initial result is adjusted by a kernel density estimation method to produce centred roads. The presented method is quantitatively evaluated on three test images. Experiments show that the proposed method yields a substantial improvement over cutting-edge methods. The findings in this study shed new light on a practical solution for road extraction from satellite images.
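A rough sketch of the seed-linking step is given below, assuming scikit-image: the image is converted to HSV and two user seeds are joined by a minimum-cost path over a brightness-based cost map, which stands in for the paper's geodesic method; the cost weighting and the use of route_through_array are illustrative choices, not the authors' implementation.

```python
import numpy as np
from skimage import color, graph

def extract_road_segment(rgb_image, seed_a, seed_b):
    """Link two user-provided seed points (row, col) with a minimum-cost path.

    Hedged sketch: the cost map below is a simple brightness-similarity term in
    HSV space, used here as a stand-in for the geodesic distance in the paper.
    """
    hsv = color.rgb2hsv(rgb_image)        # RGB -> HSV to lessen material-change effects
    value = hsv[..., 2]                   # brightness (Value) channel

    # Pixels whose brightness differs from the first seed's are made expensive,
    # so the cheapest path tends to stay on similar-looking road surface.
    seed_val = value[seed_a]
    cost = 1.0 + 50.0 * np.abs(value - seed_val)

    path, _ = graph.route_through_array(cost, seed_a, seed_b, fully_connected=True)
    return np.array(path)                 # (N, 2) row/col coordinates of the segment
```

In practice the returned polyline would then be recentred (the paper uses kernel density estimation for this), since a raw minimum-cost path may hug road edges rather than the centreline.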