2019
DOI: 10.1109/tpami.2019.2915233

HPatches: A benchmark and evaluation of handcrafted and learned local descriptors

Abstract: In this paper, we propose a novel benchmark for evaluating local image descriptors. We demonstrate that the existing datasets and evaluation protocols do not specify unambiguously all aspects of evaluation, leading to ambiguities and inconsistencies in results reported in the literature. Furthermore, these datasets are nearly saturated due to the recent improvements in local descriptors obtained by learning them from large annotated datasets. Therefore, we introduce a new large dataset suitable for training an…

Cited by 51 publications (119 citation statements)
References 87 publications (84 reference statements)
“…SuperPoint [20] introduced a self-supervised framework for training interest point detectors and descriptors. It achieves state-of-the-art homography estimation results on HPatches [11] when compared to SIFT, LIFT and Oriented FAST and Rotated BRIEF (ORB) [41]. The training procedure is, however, complicated and their self-supervision implies that the network can only find corner points.…”
Section: Related Work
confidence: 88%
“…Many alternative feature descriptors have also been proposed, such as SURF (Bay et al., 2006), BRIEF, ORB (Rublee et al., 2011), GLOH, DAISY (Tola et al., 2008), and BINK (Saleiro et al., 2017). However, experiments comparing the performance of different image descriptors for finding matching locations between images of the same scene suggest that SIFT remains one of the most accurate methods (Balntas et al., 2017b, 2018; Khan et al., 2015; Mukherjee et al., 2015; Tareen and Saleem, 2018; Wu et al., 2013a). It is also possible to learn image descriptors, and this approach can improve performance beyond that of hand-crafted descriptors (Brown et al., 2011; Schönberger et al., 2017; Simonyan et al., 2014; Trzcinski et al., 2012).…”
Section: Related Work
confidence: 99%
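The kind of descriptor comparison described in the citation above can be approximated with a simple ratio-test match count, as in the hedged sketch below. The function names are hypothetical, the raw match count is only a crude proxy for the ground-truth-based protocols used in the cited studies, and SIFT and ORB are chosen simply because both are available in OpenCV.

```python
# Hedged sketch: comparing two local descriptors on one image pair
# by counting matches that survive Lowe's ratio test.
import cv2

def count_ratio_test_matches(des1, des2, norm, ratio=0.75):
    """Count nearest-neighbour matches passing Lowe's ratio test."""
    knn = cv2.BFMatcher(norm).knnMatch(des1, des2, k=2)
    return sum(1 for pair in knn
               if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)

def compare_descriptors(img1, img2):
    """Return ratio-test match counts for SIFT and ORB on the same image pair."""
    sift, orb = cv2.SIFT_create(), cv2.ORB_create(nfeatures=2000)
    _, s1 = sift.detectAndCompute(img1, None)
    _, s2 = sift.detectAndCompute(img2, None)
    _, o1 = orb.detectAndCompute(img1, None)
    _, o2 = orb.detectAndCompute(img2, None)
    return {
        "SIFT": count_ratio_test_matches(s1, s2, cv2.NORM_L2),   # float descriptors, L2 distance
        "ORB": count_ratio_test_matches(o1, o2, cv2.NORM_HAMMING),  # binary descriptors, Hamming distance
    }
```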
“…We draw inspiration from computer vision, where comparing local image descriptors is the cornerstone of many tasks, such as stereo reconstruction or image retrieval (Szeliski, 2010). There, carefully handcrafted descriptors such as SIFT (Lowe, 1999) have been recently matched in performance by descriptors learned from raw data (Schönberger et al., 2017; Balntas et al., 2017).…”
Section: Learning Pocket Descriptors
confidence: 99%
“…Triplets are formed by selecting a positive and a negative partner for a chosen anchor (Wang et al., 2014; Hoffer and Ailon, 2015), which is problematic in a pocket matching scenario, as the ground truth relationship between most pocket pairs is unknown: in fact, only 3,991 out of 505,116 positive pairs in TOUGH-M1 can be used for constructing such triplets. Therefore, we build on the pairwise setup following Simo-Serra et al. (2015), which has shown success in computer vision tasks (Balntas et al., 2017). Specifically, given a pair of pockets Q = {(f_1, µ_1), (f_2, µ_2)} and orientations φ_1, φ_2, we minimize the following contrastive loss function (Hadsell et al., 2006) for a pair of pocket representations p_1 = p(f_1, µ_1, φ_1) and p_2 = p(f_2, µ_2, φ_2):…”
Section: Learning Pocket Descriptors
confidence: 99%
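The contrastive loss referenced in the quotation above (Hadsell et al., 2006) is cut off before the formula. The sketch below shows the standard form of that loss in PyTorch; the margin value, the Euclidean distance, and the 1/2 weighting are assumptions and may differ from the cited paper's exact formulation.

```python
# Minimal sketch of the standard Hadsell et al. (2006) contrastive loss,
# as commonly used in the pairwise descriptor-learning setup described above.
import torch
import torch.nn.functional as F

def contrastive_loss(p1, p2, y, margin=1.0):
    """p1, p2: (B, D) batches of descriptors; y: (B,) float, 1 = matching pair, 0 = non-matching."""
    d = F.pairwise_distance(p1, p2)            # Euclidean distance per pair
    pos = y * d.pow(2)                         # pull matching pairs together
    neg = (1 - y) * F.relu(margin - d).pow(2)  # push non-matching pairs beyond the margin
    return 0.5 * (pos + neg).mean()
```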