2022
DOI: 10.1109/tgrs.2021.3098752
Dual-Stream Convolutional Neural Network With Residual Information Enhancement for Pansharpening

Cited by 26 publications (8 citation statements)
References 42 publications
“…Next, the PAN image is convolved with the estimated filter, and the extracted detail components are those missing from the low-resolution MS image. Yang et al. [37] showed experimentally that the filter estimation is highly robust: even when white noise is added to the initial fused image, detail extraction is unaffected.…”
Section: The Proposed Methods
Confidence: 99%
“…This section introduces the convolutional attention mechanism [23] and the dual-stream convolutional neural network [24] used in this paper.…”
Section: Theoretical Basis
Confidence: 99%
“…However, simply stacking the pre-interpolated MS image with the PAN image as the network input not only ignores modality-specific features but also adds computational burden. Hence, instead of treating the two modalities equally, multi-branch networks apply separate sub-networks to extract the modality-specific features of each (Shao and Cai, 2018; Zhang et al., 2019c; Liu et al., 2020b; Chen et al., 2021; Zhang and Ma, 2021; Xing et al., 2020; Yang et al., 2022b).…”
Section: Homogeneous Fusion
Confidence: 99%