2020
DOI: 10.1109/access.2020.3015701
ARC-Net: An Efficient Network for Building Extraction From High-Resolution Aerial Images

Abstract: Automatic building extraction from high-resolution aerial images has important applications in urban planning and environmental management. In recent years, advances and performance improvements in building extraction have been achieved through the use of deep learning methods. However, existing models focus on improving accuracy through an excessive number of parameters and complex structural design, resulting in large computational costs during the learning phase and low inference…

Cited by 54 publications (39 citation statements) | References 62 publications
“…Recently, many papers have sought to improve building extraction performance by focusing on network architecture design. Structures such as deep and shallow feature fusion [8,38,46-55], multiple receptive fields [5,12,48,51,54-57], and residual connections [1,11,47,51,52,57-59] have been widely used in building extraction. MAP-Net [46] alleviates the scale problem by capturing spatial-localization-preserved multi-scale features through a multi-parallel path design.…”
Section: CNN-Based Methods for Building Extraction
Mentioning confidence: 99%
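
As a rough illustration of the residual-connection pattern this excerpt cites, the following minimal PyTorch sketch shows a generic residual convolution block. It is not taken from ARC-Net or MAP-Net; the layer sizes and channel count are assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Generic residual block: two 3x3 convolutions plus an identity shortcut."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The shortcut adds the input back onto the convolved features,
        # which eases optimization of deeper feature-extraction networks.
        return self.relu(x + self.body(x))

# Example: a 64-channel feature map from an aerial-image encoder (shapes assumed).
feat = torch.randn(1, 64, 128, 128)
out = ResidualBlock(64)(feat)   # same shape as the input
```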
“…In order to achieve a better balance between accuracy and efficiency, a common approach is to apply an existing lightweight network, or to adopt a more efficient convolutional module to develop a lightweight network, as the feature extraction network [122-124]. Lin et al. [108] and Liu et al. [109] developed new feature extraction backbone networks with depthwise separable convolution and asymmetric convolution, respectively, incorporating decoder networks to achieve segmentation accuracy no lower than that of mainstream networks such as U-Net and SegNet and of earlier lightweight networks such as ENet [125], with a significantly lower number of parameters and less computational effort.…”
Section: Lightweight Network Design
Mentioning confidence: 99%
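
The excerpt above refers to lightweight backbones built from depthwise separable convolutions. A minimal sketch of such a block is given below; the channel sizes are assumptions, and this is not the actual module from [108] or [109].

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution followed by a pointwise 1x1 convolution.

    The parameter count drops from k*k*C_in*C_out (standard convolution) to
    k*k*C_in + C_in*C_out, which is the main source of the efficiency gain.
    """
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Example: replacing a standard 3x3 convolution in an encoder stage (shapes assumed).
x = torch.randn(1, 32, 256, 256)
y = DepthwiseSeparableConv(32, 64)(x)  # -> (1, 64, 256, 256)
```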
“…In [26], a new FCN structure consisting of a spatial residual convolution module, named spatial residual inception (SRI), was proposed for extracting buildings from RSI. In [33], residual network connections were also used for building extraction. In [34], following the basic architecture of U-Net [2], a deep convolutional neural network named DeepResUnet was proposed, which can effectively perform urban building segmentation at the pixel scale from RSI and generate accurate segmentation results.…”
Section: Introduction
Mentioning confidence: 99%
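
DeepResUnet and similar U-Net-style extractors fuse shallow encoder features with deep decoder features through skip connections. The sketch below is a simplified illustration of that decoder step, with channel sizes assumed; it is not the architecture from [26] or [34].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderStage(nn.Module):
    """Upsample deep features and fuse them with the matching encoder features."""
    def __init__(self, deep_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(deep_ch + skip_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, deep: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        deep = F.interpolate(deep, size=skip.shape[-2:], mode="bilinear",
                             align_corners=False)
        # Concatenating shallow (high-resolution) and deep (semantic) features
        # is the "deep and shallow feature fusion" the cited papers describe.
        return self.fuse(torch.cat([deep, skip], dim=1))

deep = torch.randn(1, 128, 32, 32)   # deep, low-resolution features
skip = torch.randn(1, 64, 64, 64)    # shallow encoder features
out = DecoderStage(128, 64, 64)(deep, skip)  # -> (1, 64, 64, 64)
```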
“…Another way to improve the performance of building extraction is to make full use of the multi-scale features of the pixels. Based on this idea, multi-scale feature extractors have been incorporated into deep neural networks, such as the global multi-scale encoder-decoder network (GMEDN) [28], the U-shaped hollow pyramid pooling (USPP) network [29], ARC-Net [33], and ResUnet-a [30]. These network structures help extract and fuse the multi-scale feature information of pixels in the decoding module.…”
Section: Introduction
Mentioning confidence: 99%
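
To make the multi-scale idea concrete, here is a sketch of a pyramid-pooling-style feature extractor in PyTorch. The pooling scales and channel counts are assumptions, and this is not the USPP, GMEDN, or ARC-Net module itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    """Pool the feature map at several scales, project, upsample, and concatenate."""
    def __init__(self, in_ch: int, scales=(1, 2, 3, 6)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(s),
                nn.Conv2d(in_ch, in_ch // len(scales), kernel_size=1, bias=False),
                nn.ReLU(inplace=True),
            )
            for s in scales
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        pooled = [
            F.interpolate(branch(x), size=(h, w), mode="bilinear", align_corners=False)
            for branch in self.branches
        ]
        # Fusing features pooled at different scales gives each pixel context
        # about both small rooftops and large building blocks.
        return torch.cat([x] + pooled, dim=1)

x = torch.randn(1, 128, 32, 32)
y = PyramidPooling(128)(x)   # -> (1, 256, 32, 32)
```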