2020
DOI: 10.3390/rs12233983

Building Extraction from High Spatial Resolution Remote Sensing Images via Multiscale-Aware and Segmentation-Prior Conditional Random Fields

Abstract: Building extraction is a binary classification task that separates the building area from the background in remote sensing images. The conditional random field (CRF) directly models the maximum posterior probability of the labels, which makes full use of the spatial neighbourhood information of both the labelled and observed images, and CRF is widely used in building footprint extraction. However, edge oversmoothing still occurs when CRF is applied directly to extract buildings from high spatial resolution (HSR) remote sensing images…
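The phrase "modelled by the maximum posterior probability" refers to the standard CRF formulation. As a brief sketch under assumed notation (not taken verbatim from the paper), with label map Y, observed image X, unary potential \psi_u, pairwise potential \psi_p over a neighbourhood system \mathcal{N}, and partition function Z(X):

\begin{aligned}
P(Y \mid X) &= \frac{1}{Z(X)} \exp\Bigl( -\sum_{i} \psi_u(y_i \mid X) \;-\; \sum_{(i,j) \in \mathcal{N}} \psi_p(y_i, y_j \mid X) \Bigr), \\
Y^{*} &= \arg\max_{Y} P(Y \mid X).
\end{aligned}

The pairwise term is what propagates label evidence across spatial neighbours; it is also the term that can oversmooth building edges when the model is applied directly to high-resolution imagery, which is the problem the abstract identifies.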

Cited by 52 publications (21 citation statements). References 43 publications.
“…Recently, many papers have improved building extraction performance by focusing on network architecture design. Structures such as deep and shallow feature fusion [8,38,46–55], multiple receptive fields [5,12,48,51,54–57], and residual connections [1,11,47,51,52,57–59] have been widely used in building extraction. MAP-Net [46] alleviates the scale problem by capturing spatial-localization-preserving multi-scale features through a multi-parallel path design.…”
Section: CNN-Based Methods for Building Extraction
confidence: 99%
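The motifs this statement names (multiple receptive fields plus a residual connection) are generic CNN building blocks. As a minimal, hedged PyTorch sketch of their combination, with all module and parameter names assumed for illustration rather than taken from MAP-Net or the cited papers:

import torch
import torch.nn as nn

class MultiScaleResidualBlock(nn.Module):
    """Fuses features from several receptive fields (dilated convs)
    and adds a residual connection, a common motif in building-extraction CNNs."""
    def __init__(self, channels: int):
        super().__init__()
        # Parallel branches with increasing dilation = increasing receptive field.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 4)
        ])
        # 1x1 conv fuses the concatenated multi-scale responses back to `channels`.
        self.fuse = nn.Conv2d(3 * channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(x + self.fuse(multi_scale))  # residual connection

# Usage: refine a 64-channel feature map from an encoder.
feats = torch.randn(1, 64, 128, 128)
block = MultiScaleResidualBlock(64)
print(block(feats).shape)  # torch.Size([1, 64, 128, 128])

Because each dilated branch uses padding equal to its dilation, all branches keep the input's spatial size, so concatenation and the residual sum line up without cropping.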
“…In order to leverage large-scale contextual information and extract critical cues for identifying building pixels against complex backgrounds and under occlusion, researchers have proposed methods that capture local and long-range spatial dependencies among the ground entities in the aerial scene [55], [56]. Several researchers also use transformers [60], attention modules [12], [61]–[63], and multiscale information [8], [43], [45], [46], [64]–[66] for this purpose. Recently, multi-view satellite images [67], [68] are also being used to perform semantic segmentation of points on the ground.…”
Section: Related Work
confidence: 99%
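A minimal sketch of the attention-module idea mentioned above, i.e. letting every spatial position attend to every other to capture long-range dependencies. This is a generic non-local-style block with assumed names, not the design of any cited paper:

import torch
import torch.nn as nn

class SpatialSelfAttention(nn.Module):
    """Non-local-style attention: each position attends to all others,
    capturing long-range dependencies that plain convolutions miss."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C//8)
        k = self.key(x).flatten(2)                    # (B, C//8, HW)
        attn = torch.softmax(q @ k, dim=-1)           # (B, HW, HW) weights over keys
        v = self.value(x).flatten(2)                  # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out

# Usage: augment a 64-channel feature map with global context.
x = torch.randn(1, 64, 32, 32)
print(SpatialSelfAttention(64)(x).shape)  # torch.Size([1, 64, 32, 32])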
“…The metrics do not enforce contiguity among the pixels that belong to the same building [1]–[6]. This has led some researchers to formulate post-processing steps such as Conditional Random Fields (CRFs) [7], [8] during inference to enforce spatial contiguity in the output label maps.…”
Section: Introduction
confidence: 99%
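A hedged sketch of such CRF post-processing, here using the publicly available pydensecrf package as a stand-in for the paper's own CRF formulation; the kernel hyperparameters are illustrative assumptions:

import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image: np.ndarray, softmax_probs: np.ndarray, steps: int = 5) -> np.ndarray:
    """Refine a 2-class (building/background) softmax map with a dense CRF.

    image: (H, W, 3) uint8 RGB image.
    softmax_probs: (2, H, W) float class probabilities from the network.
    Returns: (H, W) label map with more spatially contiguous regions.
    """
    h, w = image.shape[:2]
    d = dcrf.DenseCRF2D(w, h, 2)
    d.setUnaryEnergy(unary_from_softmax(softmax_probs))
    # Smoothness kernel: nearby pixels prefer the same label.
    d.addPairwiseGaussian(sxy=3, compat=3)
    # Appearance kernel: similar colours prefer the same label (edge-aware).
    d.addPairwiseBilateral(sxy=80, srgb=13, rgbim=np.ascontiguousarray(image), compat=10)
    q = d.inference(steps)  # mean-field inference
    return np.argmax(np.array(q).reshape(2, h, w), axis=0)

The bilateral (appearance) kernel is what lets the refinement snap labels to colour edges rather than smoothing across them, which is why CRFs are a popular contiguity-enforcing post-processing step.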
“…The second is data improvement, which consists of establishing a high-quality, high-precision sample set, increasing the sample data in the study field, and improving the sample data by fusing multisource data such as DSMs [21,22]. The third is classifier synthesis, which introduces the conditional random field and the attention mechanism to improve classification accuracy [23,24]. While the encoding and decoding structure in an FCN realizes an end-to-end network, the original image information lost during the encoding phase is difficult to recover during the decoding phase, resulting in fuzzy edges in the building extraction results, a loss of building detail, and a reduction in extraction accuracy.…”
Section: Introduction
confidence: 99%
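The information loss described above is exactly what skip connections in U-Net-style encoder-decoders try to mitigate. A minimal, hedged sketch of one decoder stage (names and shapes are illustrative assumptions, not the cited papers' exact architecture):

import torch
import torch.nn as nn

class DecoderStage(nn.Module):
    """One decoder step: upsample deep features, then concatenate the
    same-resolution encoder features so edge detail lost during
    encoding can inform the decoding."""
    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, deep: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = self.up(deep)                # restore spatial resolution
        x = torch.cat([x, skip], dim=1)  # reinject encoder detail
        return self.conv(x)

# Usage: fuse a half-resolution deep map with full-resolution encoder features.
deep = torch.randn(1, 128, 64, 64)
skip = torch.randn(1, 64, 128, 128)
print(DecoderStage(128, 64, 64)(deep, skip).shape)  # torch.Size([1, 64, 128, 128])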