2023
DOI: 10.3390/rs15112927
Multi-Class Double-Transformation Network for SAR Image Registration

Abstract: In SAR image registration, most existing methods treat registration as a binary classification problem and construct paired training samples to train a deep model. However, it is difficult to obtain a large number of matched points directly from SAR images to serve as training samples. To address this, we propose a multi-class double-transformation network for SAR image registration based on the Swin Transformer. Unlike existing methods, the proposed method directly considers each key point as an i…
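The abstract contrasts the common pair-based binary labeling with a multi-class view in which each key point forms its own class. As a rough illustration of that labeling difference only (the function names, augmentation, and data below are illustrative assumptions, not the paper's code), the two schemes can be sketched as:

```python
# Hedged sketch: pair-based binary labeling vs. the multi-class labeling
# idea described in the abstract, where each key point of the reference
# SAR image becomes its own class. All names/data here are assumptions.

def binary_pair_labels(ref_points, sensed_points, matches):
    """Pair-based scheme: every (i, j) point pair gets label 1 if it is
    a known match, else 0 — this needs many given matched points."""
    labels = {}
    for i in range(len(ref_points)):
        for j in range(len(sensed_points)):
            labels[(i, j)] = 1 if (i, j) in matches else 0
    return labels

def multi_class_labels(ref_points, augment):
    """Multi-class scheme: key point i is class i; training samples are
    transformed views of that point, all sharing the label i."""
    samples = []
    for i, point in enumerate(ref_points):
        for view in augment(point):
            samples.append((view, i))  # class label = key-point index
    return samples

# Toy usage: three key points, each expanded into three "transformed" views
ref = [(10, 20), (30, 40), (50, 60)]
aug = lambda p: [p, (p[0] + 1, p[1]), (p[0], p[1] + 1)]  # stand-in transforms
data = multi_class_labels(ref, aug)  # 9 labeled samples, classes 0..2
```

The practical point of the multi-class scheme is that training data comes from transformations of each reference key point, rather than from a large set of pre-established cross-image matches.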

Cited by 3 publications (1 citation statement). References 44 publications.
“…Fang Shang [24] constructed position vectors and change vectors that compactly characterize image pixels and classified Polarimetric Synthetic Aperture Radar (PolSAR) images of complex terrain with a Quaternion Neural Network (QNN), which is not influenced by height information. Moreover, advanced techniques integrate self-learning with SIFT feature points for near-subpixel-level registration [7], employ deep forest models to enhance robustness [13], utilize unsupervised learning frameworks for multiscale registration [25][26][27], and leverage Transformer networks for efficient and accurate registration [28][29][30][31][32][33]. Deng, X.…”
Section: Deep Learningmentioning
confidence: 99%