2021
DOI: 10.1109/jstars.2021.3069222
Lattice-Point Mutually Guided Ground-to-Aerial Feature Matching for Urban Scene Images

Abstract: Ground-to-aerial feature matching bridges information from cross-view images, which enables optimized urban applications, e.g., pixel-level geolocating and complete urban 3D reconstruction. However, urban ground and aerial images typically suffer from drastic changes in viewpoint, scale, and illumination, together with repetitive patterns. Thus, direct matching of local features between ground and aerial images is particularly difficult because of the low similarity of local descriptors and high ambiguity in t…

Cited by 9 publications (2 citation statements)
References 51 publications
“…Wu et al. [9] corrected the images by projecting them onto virtual planes generated from dense point cloud data. Zheng et al. [10] first extracted the building façade structures in the aerial and ground images using the local consistency of features, then verified the images based on the transform-invariant low-rank texture [25], and finally achieved aerial-ground image matching in a mutually supervised manner between the extracted façade grid structures and matched seeds. However, the viewpoint rectification approach suffers from two problems: 1) even if the viewpoints of both images are successfully rectified, the information difference between the aerial and ground images is not handled, and the further scale change introduced by viewpoint rectification exacerbates the difficulty of feature matching; and 2) real scenes are often complex, and the extraction of planar structures is difficult to guarantee, especially when the building façades are nonplanar.…”
Section: B Viewpoint Rectification
confidence: 99%
“…To address the difficulty of 2-D/3-D feature-based registration between aerial and ground images or models, some researchers have conducted pioneering studies using viewpoint rectification [9], [10] or rendering/synthesis [1], [2], [11] to improve feature matching performance and achieve registration of aerial and ground models. Viewpoint rectification works focus on identifying view-independent planar structures (such as the ground and building façades) in the scene to rectify the aerial and ground images to a normalized viewpoint, thus reducing the differences in viewpoint between the aerial and ground…”
Section: Introduction
confidence: 99%
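The viewpoint rectification described in these citation statements amounts to warping façade pixels through a planar homography so that both views become fronto-parallel. A minimal NumPy sketch of that core operation is below; the function name, the toy scaling homography, and the example coordinates are illustrative assumptions, not taken from the cited papers:

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 planar homography H to an Nx2 array of pixel coordinates.

    Viewpoint rectification maps pixels on a planar structure (e.g., a
    building facade) to a normalized, fronto-parallel view via such a
    homography, before local features are extracted and matched.
    """
    pts = np.asarray(pts, dtype=float)
    # Lift to homogeneous coordinates: (x, y) -> (x, y, 1).
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
    # Project through the homography and dehomogenize.
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Toy example: a pure scaling homography doubles pixel coordinates.
H = np.diag([2.0, 2.0, 1.0])
print(warp_points(H, [[10.0, 20.0]]))  # [[20. 40.]]
```

Note that this sketch only transfers point coordinates; rectifying a whole image additionally requires resampling intensities at the warped locations (e.g., with an image-warping routine), and the scale change such a warp introduces is exactly the side effect criticized in the statement above.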