2016
DOI: 10.1007/978-3-319-46466-4_28
LIFT: Learned Invariant Feature Transform

Abstract: We introduce a novel Deep Network architecture that implements the full feature point handling pipeline, that is, detection, orientation estimation, and feature description. While previous works have successfully tackled each one of these problems individually, we show how to learn to do all three in a unified manner while preserving end-to-end differentiability. We then demonstrate that our Deep pipeline outperforms state-of-the-art methods on a number of benchmark datasets, without the need for retraining.
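To make the "unified, end-to-end differentiable" claim concrete, the sketch below wires a detector, an orientation estimator, and a descriptor together with differentiable glue (a soft-argmax over the score map and spatial-transformer crops). This is a minimal PyTorch sketch under assumed shapes and network sizes, not the authors' implementation; the names LiftLikePipeline, TinyCNN, and soft_argmax_2d are illustrative.

```python
# Minimal sketch (assumptions throughout) of a LIFT-style detect -> orient -> describe
# pipeline in which every stage stays differentiable, so all three can be trained jointly.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    """Small conv stack used as a stand-in for each learned component."""
    def __init__(self, out_dim):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim),
        )
    def forward(self, x):
        return self.body(x)

def soft_argmax_2d(score):                      # score: (N, 1, H, W)
    """Differentiable keypoint location: expectation over a softmax of the score map."""
    n, _, h, w = score.shape
    probs = F.softmax(score.view(n, -1), dim=1).view(n, 1, h, w)
    ys = torch.linspace(-1, 1, h, device=score.device)
    xs = torch.linspace(-1, 1, w, device=score.device)
    y = (probs.sum(dim=3).squeeze(1) * ys).sum(dim=1)
    x = (probs.sum(dim=2).squeeze(1) * xs).sum(dim=1)
    return torch.stack([x, y], dim=1)           # (N, 2), normalized coordinates

class LiftLikePipeline(nn.Module):
    def __init__(self, patch=32, desc_dim=128):
        super().__init__()
        self.patch = patch
        self.detector = nn.Conv2d(1, 1, 5, padding=2)   # dense keypoint score map
        self.orienter = TinyCNN(out_dim=1)              # predicts an angle (radians)
        self.describer = TinyCNN(out_dim=desc_dim)      # produces the descriptor

    def crop(self, img, center, angle):
        """Differentiable rotated crop via a spatial transformer (affine_grid + grid_sample)."""
        n = img.shape[0]
        cos, sin = torch.cos(angle), torch.sin(angle)
        scale = self.patch / img.shape[-1]
        theta = torch.zeros(n, 2, 3, device=img.device)
        theta[:, 0, 0], theta[:, 0, 1] = cos * scale, -sin * scale
        theta[:, 1, 0], theta[:, 1, 1] = sin * scale, cos * scale
        theta[:, :, 2] = center
        grid = F.affine_grid(theta, (n, 1, self.patch, self.patch), align_corners=False)
        return F.grid_sample(img, grid, align_corners=False)

    def forward(self, img):                     # img: (N, 1, H, W)
        center = soft_argmax_2d(self.detector(img))
        upright = self.crop(img, center, torch.zeros(img.shape[0], device=img.device))
        angle = self.orienter(upright).squeeze(1)
        oriented = self.crop(img, center, angle)
        desc = F.normalize(self.describer(oriented), dim=1)
        return center, angle, desc
```

Because gradients flow from the descriptor loss back through the rotated crop into both the orientation estimator and the detector, the three components can be optimized together rather than trained in isolation, which is the property the abstract emphasizes.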

Cited by 909 publications (652 citation statements) · References 45 publications
“…Some of these techniques have extracted intermediate activations as the descriptor [16,14,9,33], which have been shown to be effective for patch-level matching. Other methods have directly learned a similarity measure for comparing patches using a convolutional similarity network [19,51,41,50]. Even though CNN-based descriptors encode a discriminative structure with a deep architecture, they have inherent limitations in handling large intra-class variations [41,10].…”
Section: Related Work
confidence: 99%
“…Instead, we compute the similarity of sampled patch pairs through CNNs. With l omitted for simplicity, the self-similarity between a patch pair P_{i-s} and P_{i-t} is formulated through a Siamese network, followed by a decision or metric network [51,19] or a simple L2 distance [41,50], as shown in Fig. 2(a).…”
Section: CSS: Convolutional Self-Similarity Layer
confidence: 99%
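For concreteness, the sketch below shows the two scoring options the quoted passage mentions: a shared-weight Siamese encoder whose features are compared either by a small decision/metric network or by a plain L2 distance. It is a minimal assumption-laden PyTorch sketch, not the cited papers' code; the class name SiameseSimilarity and all layer sizes are illustrative.

```python
# Sketch (illustrative, not the cited implementations) of Siamese patch similarity:
# two patches pass through the same encoder; the score comes from either a learned
# metric head on the concatenated features or a negative L2 distance between them.
import torch
import torch.nn as nn

class SiameseSimilarity(nn.Module):
    def __init__(self, dim=64, use_metric_net=True):
        super().__init__()
        self.encoder = nn.Sequential(           # shared weights for both patches
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )
        self.use_metric_net = use_metric_net
        self.metric = nn.Sequential(            # decision/metric head
            nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, patch_a, patch_b):        # each: (N, 1, H, W)
        fa, fb = self.encoder(patch_a), self.encoder(patch_b)
        if self.use_metric_net:
            return self.metric(torch.cat([fa, fb], dim=1)).squeeze(1)  # learned score
        return -torch.norm(fa - fb, dim=1)      # higher = more similar
```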
“…In [13], the dual-tree complex wavelet transform is used in a content-based image retrieval application. In [14], the authors used a deep network architecture to build a Learned Invariant Feature Transform (LIFT). In [15], Brown et al. presented a generalized framework for salient feature detection using histograms.…”
Section: Copyright © 2017 MECS
confidence: 99%