2022
DOI: 10.1109/tgrs.2021.3073923

MSACon: Mining Spatial Attention-Based Contextual Information for Road Extraction

Cited by 21 publications (12 citation statements)
References 63 publications
“…The lack of long-distance context information directly leads to discontinuous road-extraction results, or even to roads that cannot be extracted at all. To connect discontinuous, broken roads, many researchers have considered various schemes for capturing long-distance context information to model the topological relationship between broken road segments [29]. The main method is to use atrous convolution [30].…”
Section: Type of Deep Learning
confidence: 99%
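The role atrous convolution plays here is easy to see in code. The sketch below is a generic ASPP-style block, not the specific design of [30]; the class name, channel count, and dilation rates are illustrative assumptions. Parallel branches with increasing dilation rates enlarge the receptive field without downsampling, which is what lets distant road pixels influence one another.

```python
import torch
import torch.nn as nn

class AtrousContextBlock(nn.Module):
    """Illustrative ASPP-style block: parallel dilated (atrous) convolutions
    aggregate long-distance context without reducing spatial resolution."""

    def __init__(self, channels: int, dilations=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=d, dilation=d)  # padding=d preserves H x W
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate multi-rate context, then fuse back to `channels`.
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))

feats = torch.randn(1, 64, 128, 128)
out = AtrousContextBlock(64)(feats)   # shape preserved: (1, 64, 128, 128)
```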
“…Y. X. Xu et al. proposed a spatial attention-based road extraction network that employs the signed distance between roads and buildings to enhance extraction accuracy for potential roads around occluded areas of remote sensing images [25]. Z. Chen et al. modified the U-Net architecture and designed an asymmetric encoder-decoder network for road extraction from remote sensing images [26].…”
Section: Road Extraction
confidence: 99%
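As a rough illustration of the spatial attention idea referenced in [25], the sketch below implements a generic CBAM-style spatial gate: channel-pooled statistics are convolved into a per-pixel weight map that rescales the features. This is an assumption-level example, not the actual module from MSACon or [25].

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Generic spatial attention gate (CBAM-style): channel-wise average and
    max pooling are fused by a convolution into a per-pixel sigmoid weight."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_pool = x.mean(dim=1, keepdim=True)   # (B, 1, H, W)
        max_pool = x.amax(dim=1, keepdim=True)   # (B, 1, H, W)
        gate = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * gate                           # emphasize road-like pixels
```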
“…The weighted sum of the feature statistics was used as the bias estimate of the color cast in areas with weak semantic correspondences, and the weight for each cluster was measured by its number of valid pixels, as detailed in Eq. (11), where $\mu_i^c$ and $\sigma_i^c$ represent the mean and variance of the $i$-th cluster in the feature space, and $H$ and $W$ denote the height and width of the feature maps, respectively. The symbol $=$ denotes numerical equality, and the Boolean operator $(M_{ab}^c = i)$ indicates that the statistics are computed per cluster.…”
Section: Weight-Adjusted AdaIN
confidence: 99%
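Since Eq. (11) itself was not carried over into the excerpt, the sketch below implements only the surrounding description: per-cluster means and variances over an H x W feature map, with each cluster weighted by its count of valid pixels. The function name and tensor layout are assumptions made for illustration.

```python
import torch

def weighted_cluster_statistics(feat: torch.Tensor,
                                cluster_map: torch.Tensor,
                                num_clusters: int):
    """feat: (C, H, W) feature map; cluster_map: (H, W) integer labels.
    Returns valid-pixel-weighted sums of per-cluster means and variances,
    mirroring the description around Eq. (11). Assumes every label in
    [0, num_clusters) occurs at least once."""
    C = feat.shape[0]
    flat = feat.reshape(C, -1)              # (C, H*W)
    labels = cluster_map.reshape(-1)        # (H*W,)
    means, variances, weights = [], [], []
    for i in range(num_clusters):
        mask = labels == i                  # the Boolean operator (M == i)
        weights.append(mask.sum().float())  # weight = number of valid pixels
        sel = flat[:, mask]                 # features belonging to cluster i
        means.append(sel.mean(dim=1))                     # mu_i, shape (C,)
        variances.append(sel.var(dim=1, unbiased=False))  # sigma_i^2
    w = torch.stack(weights)
    w = w / w.sum()                         # normalize cluster weights
    mu, var = torch.stack(means), torch.stack(variances)  # (K, C) each
    # Weighted sums serve as the bias estimate of the color cast.
    return (w[:, None] * mu).sum(0), (w[:, None] * var).sum(0)
```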
“…Color transfer is conducted only between regions with the same semantic category [9]. Attentional color transfer methods typically compute the normalized cross correlation between the representations of the image pair and reassemble the deep features according to that correlation for image synthesis [10,11]. In conventional color transfer methods, the mean, standard deviation, or other statistical measures are often employed in the second step to design a linear or nonlinear transform function [12].…”
Section: Introduction
confidence: 99%
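To make the attentional step concrete, here is a minimal sketch of normalized cross correlation between deep features of an image pair, followed by correlation-weighted reassembly of the reference features. The function name, softmax temperature, and tensor shapes are illustrative assumptions, not the implementation of [10,11].

```python
import torch
import torch.nn.functional as F

def reassemble_by_correlation(src: torch.Tensor, ref: torch.Tensor,
                              temperature: float = 0.01) -> torch.Tensor:
    """src, ref: (C, H, W) deep features of the image pair. Each source
    position attends to reference positions via normalized cross correlation,
    and the reference features are reassembled accordingly."""
    C, H, W = src.shape
    s = F.normalize(src.reshape(C, -1), dim=0)   # unit-norm feature columns
    r = F.normalize(ref.reshape(C, -1), dim=0)   # (C, H*W)
    ncc = s.t() @ r                              # (H*W, H*W) cross correlation
    attn = F.softmax(ncc / temperature, dim=-1)  # sharpen into matching weights
    out = ref.reshape(C, -1) @ attn.t()          # weighted reassembly, (C, H*W)
    return out.reshape(C, H, W)
```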