2021
DOI: 10.1109/jstars.2021.3102137
DSPCANet: Dual-Channel Scale-Aware Segmentation Network With Position and Channel Attentions for High-Resolution Aerial Images

Cited by 14 publications (10 citation statements)
References 57 publications
“…As can be seen from Table 9, the results of our proposed model MQANet show better results on both datasets. On the Vaihingen dataset, our MQANet network has higher Mean F1 and OA than other networks, and on the Potsdam dataset, our MQANet network has higher Mean F1 than other networks, and only OA is 0.08% lower than DSPCANet [15]. Mean F1 score for classification is calculated as the harmonic mean of precision and recall [22], and OA is the ratio of the number of correct pixels to the total number of pixels.…”
Section: Discussion of Overall Experimental Results
confidence: 99%
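The two metrics quoted above — mean F1 as the class-averaged harmonic mean of precision and recall, and OA as the fraction of correctly classified pixels — can be sketched in a few lines. This is a minimal NumPy illustration (the function name and shapes are our own, not from the cited papers):

```python
import numpy as np

def mean_f1_and_oa(y_true, y_pred, num_classes):
    """Mean per-class F1 (harmonic mean of precision and recall)
    and overall accuracy (correct pixels / total pixels)."""
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()
    f1s = []
    for c in range(num_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        # Harmonic mean of precision and recall
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        f1s.append(f1)
    oa = float(np.mean(y_true == y_pred))
    return float(np.mean(f1s)), oa
```

On a toy 4-pixel example with one misclassified pixel, OA is 0.75 while mean F1 weighs the per-class precision/recall trade-off separately.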
“…OA = (TP + TN) / (TP + TN + FP + FN) (15), with the following terms: true positive (TP), false positive (FP), true negative (TN), false negative (FN).…”
Section: Evaluation Metrics
confidence: 99%
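Equation (15) quoted above maps directly onto the four confusion-matrix counts. A minimal sketch (the helper name is our own):

```python
def overall_accuracy(tp, tn, fp, fn):
    """OA = (TP + TN) / (TP + TN + FP + FN): the share of all
    predictions, positive or negative, that were correct."""
    return (tp + tn) / (tp + tn + fp + fn)
```

For example, 40 true positives and 50 true negatives out of 100 samples give an OA of 0.90.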
“…Many parallel atrous convolution layers with different rates are fused to capture contextual information at various scales. Atrous convolution expands the field of view to capture multi-scale features, but also generates more parameters (Li et al, 2021). In the proposed algorithm, ASPP operates as a bridge between encoder and decoder on both sides of the network, as shown in Figure 1.…”
Section: Atrous Spatial Pyramidal Pooling
confidence: 99%
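The field-of-view expansion described above can be made concrete: a k × k kernel with dilation rate r covers k + (k − 1)(r − 1) pixels, and ASPP fuses several such rates in parallel. A minimal NumPy sketch of this idea (not the cited implementation — real ASPP modules use learned multi-channel convolutions and concatenation rather than this toy single-channel sum):

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """Valid 2D convolution whose kernel taps are spaced `rate` pixels
    apart, so a k x k kernel spans k + (k-1)*(rate-1) pixels."""
    k = kernel.shape[0]
    eff = k + (k - 1) * (rate - 1)  # effective field of view
    h, w = x.shape
    out = np.zeros((h - eff + 1, w - eff + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + eff:rate, j:j + eff:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

def aspp_like(x, kernel, rates=(1, 2, 4)):
    """Parallel dilated convolutions fused (summed) after cropping to
    the smallest output: multi-scale context from one shared kernel."""
    outs = [dilated_conv2d(x, kernel, r) for r in rates]
    h = min(o.shape[0] for o in outs)
    w = min(o.shape[1] for o in outs)
    return sum(o[:h, :w] for o in outs)
```

With a 3 × 3 kernel, rate 4 already spans a 9 × 9 window, which is why atrous branches see large context without extra taps per branch.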
“…Then, the FPN is reconstructed in a bottom-top format to reduce the span between high-level feature maps and low-level ones, which can enrich detail information of high-level feature maps and avoid loss of semantic information from channel reduction. In addition, a channel attention module (CAM) [37] is introduced to reconstruct the FPN to connect adjacent feature layers while generating salient features. The CAM uses pooling operations (Max-Pool and AvgPool) to generate channel context descriptors and then outputs channel attention feature maps through a shared network, which consists of a multilayer perceptron (MLP).…”
Section: Ship Detector
confidence: 99%
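The channel attention module described above — Max-Pool and Avg-Pool channel descriptors passed through a shared MLP, then combined into per-channel weights — can be sketched as follows. This is a minimal NumPy illustration with hypothetical weight matrices `w1`/`w2` standing in for the shared MLP's learned parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention.
    feat: (C, H, W) feature map; w1: (C, C//r), w2: (C//r, C)
    are the shared two-layer MLP weights (r = reduction ratio)."""
    avg_desc = feat.mean(axis=(1, 2))  # AvgPool channel descriptor, (C,)
    max_desc = feat.max(axis=(1, 2))   # MaxPool channel descriptor, (C,)
    # The same MLP (ReLU between layers) processes both descriptors
    shared = lambda d: np.maximum(d @ w1, 0.0) @ w2
    attn = sigmoid(shared(avg_desc) + shared(max_desc))  # (C,)
    # Rescale each channel of the input by its attention weight
    return feat * attn[:, None, None]
```

Sharing the MLP between the two pooled descriptors is the key design choice: it keeps the parameter count low while still letting the module compare average and peak activations per channel.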