2022
DOI: 10.1109/tgrs.2022.3226778
Joint Variation Learning of Fusion and Difference Features for Change Detection in Remote Sensing Images

Cited by 16 publications (7 citation statements) · References 48 publications
“…These proposed new architectures are in line with recent developments in 2D image change detection concerning the importance of data fusion [25], [29], [30] for the change detection task. Indeed, by convolving change features in the encoder, we expect that the network will put more attention on changes and also better combine multi-scale change features.…”
Section: B. New Models for 3D Point Clouds Change Detection (supporting)
confidence: 68%
“…For example, [28] proposes a network based on the three results of addition, subtraction, and concatenation of features at multiple scales. The study in [29] steps aside from the traditional Siamese network with one input per branch: it takes in one branch the concatenation of the two images and in the other their difference, forming two sub-networks with different properties. At the output of each layer, the features of the two branches are summed and then concatenated to the corresponding scale in the decoder via skip connections.…”
mentioning
confidence: 99%
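The two-branch scheme described in this citation can be sketched with plain array operations. This is an illustrative numpy sketch, not the authors' implementation; all shapes and variable names are hypothetical stand-ins for what convolutional encoders would actually produce.

```python
import numpy as np

rng = np.random.default_rng(0)
img_t1 = rng.random((3, 8, 8))   # bi-temporal image 1, (channels, H, W)
img_t2 = rng.random((3, 8, 8))   # bi-temporal image 2

# Branch 1 input: channel-wise concatenation of the two images.
fusion_input = np.concatenate([img_t1, img_t2], axis=0)  # (6, 8, 8)

# Branch 2 input: per-pixel difference of the two images.
diff_input = img_t1 - img_t2                             # (3, 8, 8)

# Stand-ins for the per-layer features each branch would produce; in the
# real network these come from the two convolutional sub-networks.
feat_fusion = rng.random((16, 4, 4))
feat_diff = rng.random((16, 4, 4))

# At each layer the two branches' features are summed ...
summed = feat_fusion + feat_diff                         # (16, 4, 4)

# ... then concatenated with the decoder feature at the matching scale
# through a skip connection.
decoder_feat = rng.random((16, 4, 4))
skip = np.concatenate([summed, decoder_feat], axis=0)    # (32, 4, 4)
```

The point of the sketch is the asymmetry of the two inputs: concatenation preserves both images' raw content, while the difference emphasizes change, giving the two sub-networks different properties by construction.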
“…Generally, the changed objects in bi-temporal images often have significant size variations [24], which leads to incomplete detection targets and unsmooth boundaries in the results. Consequently, it is imperative to explore multi-scale feature fusion strategies to smooth the boundaries and improve the accuracy of results.…”
Section: Multi-Scale Gated Fusion Module (mentioning)
confidence: 99%
“…Although the methods mentioned above have achieved strong performance, they do not consider the characteristics of bi-temporal multi-scale features, resulting in incomplete detection targets and limited accuracy. Inspired by the multi-scale pyramid architecture widely used to extract multi-scale feature information in medical image segmentation [21], several methods have been proposed to address these problems using multi-scale features [22]–[24]. For instance, Li et al. [23] proposed a multi-scale convolutional channel attention mechanism to generate detailed local features and integral global features.…”
Section: Introduction (mentioning)
confidence: 99%
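The channel-attention idea referenced here follows a common squeeze-and-excitation pattern. The following is a minimal numpy sketch of that general pattern, not the module from [23]; a real implementation would insert a small learned MLP between the pooling and gating steps.

```python
import numpy as np

def channel_attention(feat):
    """Illustrative squeeze-and-excitation-style channel attention."""
    # Squeeze: global average pool over the spatial dimensions.
    weights = feat.mean(axis=(1, 2))          # (C,)
    # Excite: a per-channel sigmoid gate (a learned MLP would go here).
    gate = 1.0 / (1.0 + np.exp(-weights))     # (C,), each value in (0, 1)
    # Scale: reweight each channel of the feature map.
    return feat * gate[:, None, None]

rng = np.random.default_rng(0)
# Hypothetical features at two scales of a pyramid.
feat_s1 = rng.random((16, 8, 8))
feat_s2 = rng.random((16, 4, 4))

out_s1 = channel_attention(feat_s1)
out_s2 = channel_attention(feat_s2)
```

Because the same gate is applied at every scale, the mechanism can emphasize informative channels in both the fine (local) and coarse (global) feature maps.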
“…The direct classification approach aims to identify both the region and category of changes using an end-to-end neural network architecture. Two principal types of such end-to-end change detection networks exist: early fusion and late fusion [24]. Early fusion architectures integrate bi-temporal images as multi-channel inputs, tailored to fit semantic segmentation networks.…”
Section: A. Deep Learning-Based Change Detection (mentioning)
confidence: 99%
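The early-fusion/late-fusion distinction drawn in this citation can be illustrated in a few lines. This is a hedged numpy sketch under simplifying assumptions: `encoder` is a hypothetical stand-in (global average pooling) for a real CNN backbone.

```python
import numpy as np

rng = np.random.default_rng(0)
img_t1 = rng.random((3, 8, 8))   # bi-temporal image 1, (channels, H, W)
img_t2 = rng.random((3, 8, 8))   # bi-temporal image 2

def encoder(x):
    """Stand-in for a CNN encoder: one value per input channel."""
    return x.mean(axis=(1, 2))

# Early fusion: stack the bi-temporal images into a single multi-channel
# input, so an ordinary semantic segmentation network can process them.
early_input = np.concatenate([img_t1, img_t2], axis=0)   # (6, 8, 8)
early_feat = encoder(early_input)                        # (6,)

# Late fusion: encode each image separately (Siamese branches), then
# fuse the resulting features, e.g. by taking their difference.
late_feat = encoder(img_t1) - encoder(img_t2)            # (3,)
```

Early fusion lets the network mix temporal information from the first layer onward; late fusion keeps the branches independent until the feature level, which is what makes weight-shared Siamese designs possible.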