2009 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2009.5206545
Learning invariant features through topographic filter maps


Cited by 218 publications (148 citation statements)
References 20 publications
“…Several works have shown that applying sparse coding to local parts or descriptors of the images can capture higher-level features compared to raw image patches [38,39]. In our work, we divide the image into square patches of fixed size and extract low-level features from each of the patches.…”
Section: Features Extraction and Representation
confidence: 99%
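The excerpt above describes a common pipeline: divide an image into fixed-size square patches and sparse-code each patch against a dictionary. A minimal numpy sketch of that idea follows; the random dictionary and the use of ISTA (a standard iterative soft-thresholding solver, not necessarily the citing papers' exact method) are illustrative assumptions.

```python
import numpy as np

def extract_patches(image, size):
    """Divide an image into non-overlapping square patches of a fixed size."""
    h, w = image.shape
    patches = [image[i:i + size, j:j + size].ravel()
               for i in range(0, h - size + 1, size)
               for j in range(0, w - size + 1, size)]
    return np.stack(patches)

def ista(D, x, lam=0.1, n_iter=100):
    """Sparse-code x against dictionary D with ISTA (gradient + soft threshold)."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ z - x)                  # gradient of 0.5 * ||D z - x||^2
        z = z - g / L
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return z

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))
patches = extract_patches(image, 8)            # 16 patches of 64 pixels each
D = rng.standard_normal((64, 128))             # hypothetical overcomplete dictionary
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
codes = np.stack([ista(D, p) for p in patches])
print(patches.shape, codes.shape)              # (16, 64) (16, 128)
```

Each 64-pixel patch is represented by a 128-dimensional code, i.e. an overcomplete representation of local image content rather than of raw whole-image pixels.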
“…Figure 1 shows an example map of a 2D topography. In this paper, the function will not be learnt as [7,9,10], and we will keep it fixed for simplicity. Some studies on how to estimate the topography function can be found in [8,11,25].…”
Section: Model
confidence: 99%
“…In a recent work, Kavukcuoglu et al have proposed an architecture and a learning algorithm that can learn location-invariant feature descriptors [10]. Those feature descriptors include a bank of filters which are learnt by an improved sparse coding model [3] with SRSS pooling.…”
Section: The Invariance To Translations
confidence: 99%
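The SRSS pooling mentioned in this excerpt aggregates sparse codes laid out on a 2D topographic map by taking the square root of the sum of squares over overlapping neighborhoods. A small numpy sketch of such a pooling operator is below; the stride-1, wrap-around 3x3 neighborhoods are an illustrative assumption, not necessarily the exact topography used in the cited work.

```python
import numpy as np

def srss_pool(z, map_side, pool_side):
    """Square-Root-of-Sum-of-Squares pooling over overlapping 2D neighborhoods.

    z         : code vector arranged on a map_side x map_side topographic map
    pool_side : side length of each (stride-1, wrap-around) pooling window
    """
    grid = z.reshape(map_side, map_side)
    pooled = np.empty((map_side, map_side))
    for i in range(map_side):
        for j in range(map_side):
            # wrap-around so every unit belongs to pool_side**2 windows
            rows = [(i + di) % map_side for di in range(pool_side)]
            cols = [(j + dj) % map_side for dj in range(pool_side)]
            window = grid[np.ix_(rows, cols)]
            pooled[i, j] = np.sqrt(np.sum(window ** 2))
    return pooled.ravel()

z = np.zeros(64)
z[10] = 3.0                  # a single active unit on an 8x8 map
p = srss_pool(z, 8, 3)
print(np.count_nonzero(p))   # 9: the unit feeds all 9 overlapping 3x3 pools
```

Because a unit's energy appears in every neighborhood that contains it, small shifts of activation within a neighborhood leave the pooled values nearly unchanged, which is the source of the local invariance the excerpt refers to.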
“…The first is the structured dictionary learning, where the interaction between the dictionary atoms is also learned/imposed. Common structures are tree structures [39] and grid structures [46]. The second variation is multiscale dictionary learning, extending the basic dictionary learning scheme to consider different patch sizes [68].…”
Section: Learning Overcomplete Dictionaries
confidence: 99%
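Structured dictionary learning, as described in this last excerpt, imposes interactions between atoms through group structure (e.g. tree or grid neighborhoods): whole groups of coefficients are shrunk together rather than independently. A minimal sketch of the group soft-thresholding (proximal) step behind such schemes is shown below; the two-group layout and threshold value are illustrative assumptions.

```python
import numpy as np

def group_shrink(z, groups, lam):
    """Group soft-thresholding: shrink each group of coefficients jointly,
    the proximal step used in structured (tree/grid) sparse coding."""
    out = z.copy()
    for g in groups:
        norm = np.linalg.norm(z[g])
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        out[g] = scale * z[g]          # the whole group survives or dies together
    return out

z = np.array([3.0, 4.0, 0.1, 0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
s = group_shrink(z, groups, 1.0)
print(s)   # the strong group (norm 5) shrinks; the weak group is zeroed
```

Unlike plain per-coefficient soft thresholding, the group rule drives structurally related atoms to activate or deactivate as a unit, which is how tree and grid structures shape the learned dictionary.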