A novel method, termed global and local transformation consistency constraints, which combines the scale, orientation and spatial layout information of scale-invariant feature transform (SIFT) features, is proposed for discarding mismatches from a set of putative point correspondences. Experiments show that the proposed method efficiently extracts high-precision matches from low-precision putative SIFT matches on wide baseline image pairs, and outperforms or performs close to state-of-the-art approaches.

Introduction: Local features are powerful tools for finding correspondences between wide baseline views of the same scene. Feature-based algorithms typically first establish putative correspondences and then estimate the global geometric relationship (such as a homography) that best explains them. However, many well-known robust estimators (such as RANSAC [1]) perform poorly when the inlier ratio falls below 50% [2], whereas discarding mismatches before estimating this relationship yields substantial improvements, especially when incorrect matches strongly outnumber correct ones. Previous work on discarding mismatches (see e.g. [3, 4]) mainly exploits the geometric and topological relationships among putative matches, but ignores the scale and orientation information of the candidate feature pairs, which together express a similarity transformation.

This Letter focuses on rejecting mismatches by evaluating the quality of each potential correspondence, as measured by both global and local transformation consistency. The algorithm proceeds in two steps. First, a global constraint retains those matches whose scale log-ratio and orientation difference are close to the global scaling and rotation factors, respectively. Then, a local constraint rejects further incorrect matches surviving the first step by imposing the stricter requirement that neighbouring feature pairs undergo similar transformations. Experiments show that the approach presented in this Letter improves on currently achieved wide baseline matching precision, with 10% fewer errors on most of the six well-known wide baseline image pairs provided by Tuytelaars and Van Gool [5].
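To make the two-step procedure concrete, the following is a minimal Python/NumPy sketch of the idea, not the Letter's exact formulation: the match layout, the median estimate of the global scaling and rotation factors, the tolerance thresholds and the k-nearest-neighbour vote are all illustrative assumptions.

import numpy as np

def filter_matches(matches, k=8, log_scale_tol=0.5,
                   angle_tol=np.deg2rad(20.0), local_frac=0.5):
    """Two-step mismatch rejection from putative SIFT matches.

    `matches` is assumed to be an (N, 6) array per correspondence:
    [x1, y1, x2, y2, log(scale2 / scale1), orientation2 - orientation1]
    (hypothetical layout; the Letter does not prescribe a data format).
    """
    log_ratio = matches[:, 4]
    # Wrap orientation differences to (-pi, pi].
    dtheta = np.mod(matches[:, 5] + np.pi, 2.0 * np.pi) - np.pi

    # Step 1 (global constraint): estimate dominant scaling and rotation
    # factors (here simply by the median) and keep matches close to them.
    s_global = np.median(log_ratio)
    r_global = np.median(dtheta)
    ang_dev = np.abs(np.mod(dtheta - r_global + np.pi, 2.0 * np.pi) - np.pi)
    keep = (np.abs(log_ratio - s_global) < log_scale_tol) & (ang_dev < angle_tol)
    surv = matches[keep]

    # Step 2 (local constraint): each surviving match must agree with most of
    # its k nearest neighbours (in the first image) on scale and rotation.
    pts1 = surv[:, :2]
    final = []
    for i in range(len(surv)):
        dist = np.linalg.norm(pts1 - pts1[i], axis=1)
        nbr = np.argsort(dist)[1:k + 1]          # exclude the point itself
        if len(nbr) == 0:
            continue
        d_ang = np.abs(np.mod(surv[nbr, 5] - surv[i, 5] + np.pi, 2.0 * np.pi) - np.pi)
        sim = (np.abs(surv[nbr, 4] - surv[i, 4]) < log_scale_tol) & (d_ang < angle_tol)
        if sim.mean() >= local_frac:
            final.append(i)
    return surv[final]

In this sketch the global step removes matches whose similarity parameters disagree with the dominant image-to-image transformation, and the local step enforces that the transformation varies smoothly among spatially neighbouring features, mirroring the coarse-to-fine filtering described above.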