2022
DOI: 10.1109/jstars.2021.3134676
Feature Matching and Position Matching Between Optical and SAR With Local Deep Feature Descriptor

Cited by 22 publications (15 citation statements)
References 51 publications
“…To validate the performance of the proposed MSA-Net, the experiments are performed on a public SAR and optical image dataset, SEN1-2 [36], and five pairs of SAR and optical images. We compare our method with three handcrafted and three deep learning methods: OS-SIFT [16], PCSD [18], RIFT [19], HardNet [21], MatchosNet [26], and CNet [27]. Section 4.1 describes the datasets.…”
Section: Experiments and Discussion
confidence: 99%
“…Hughes et al. [23], [24] explained that pseudo-Siamese networks are better suited for multimodal image matching. MatchosNet [26] and CNet [27], based on a pseudo-Siamese network structure, have achieved average results in SAR and optical image registration. MatchosNet performs template matching, which obtains good results only when there is no large translation or rotation transformation between the two images to be registered.…”
Section: Related Work
confidence: 99%