1998
DOI: 10.1088/0954-898x_9_2_005

Development of localized oriented receptive fields by learning a translation-invariant code for natural images

Abstract: Neurons in the mammalian primary visual cortex are known to possess spatially localized, oriented receptive fields. It has previously been suggested that these distinctive properties may reflect an efficient image encoding strategy based on maximizing the sparseness of the distribution of output neuronal activities or alternately, extracting the independent components of natural image ensembles. Here, we show that a strategy for transformation-invariant coding of images based on a first-order Taylor series exp…
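The truncated abstract refers to an encoding strategy built on a first-order Taylor series expansion. As a rough illustration of that general idea (not the paper's actual model), the sketch below approximates a small image translation by adding scaled spatial derivatives to the original image; the function name and test pattern are invented for this example.

```python
# A minimal sketch of first-order Taylor modelling of a small translation:
# I(x + dx, y + dy) ~= I(x, y) + dx * dI/dx + dy * dI/dy.
# `translate_first_order` and the test grating are placeholders for illustration.
import numpy as np

def translate_first_order(image, dx, dy):
    """Approximate a small (sub-pixel) shift of `image` by (dx, dy) pixels."""
    grad_y, grad_x = np.gradient(image.astype(float))  # derivatives along rows, columns
    return image + dx * grad_x + dy * grad_y

# Usage: shift a smooth vertical grating a quarter of a pixel to the right.
xs = np.linspace(0.0, 2.0 * np.pi, 64)
grating = np.ones((64, 1)) * np.sin(xs)[None, :]
shifted = translate_first_order(grating, dx=0.25, dy=0.0)
```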

Cited by 14 publications (14 citation statements); references 60 publications (75 reference statements).
“…It generalizes previous approaches based on first-order Taylor series expansions of images (Black & Jepson, 1996; Rao & Ballard, 1998), which can account for only small transformations due to their assumption of a linear generative model for the transformed images. The Lie approach, on the other hand, utilizes a matrix-exponential-based generative model that can handle arbitrarily large transformations once the correct transformation operators have been learned.…”
Section: Introduction (mentioning; confidence: 82%)
“…Defining dI/dz = G I for some operator matrix G, we can rewrite equation 2.3 as I(z) = e^{zG} I_0, which is the same as equation 2.2 with I_0 = I(0). Thus, some previous approaches based on first-order Taylor series expansions (Shi & Tomasi, 1994; Black & Jepson, 1996; Rao & Ballard, 1998) can be viewed as special cases of the Lie group-based generative model.…”
Section: Continuous Transformations and Lie Groups (mentioning; confidence: 99%)
“…A fundamental problem in vision is to simultaneously recognize objects and their transformations (Anderson & Van Essen, 1987; Olshausen et al., 1995; Rao & Ballard, 1998; Rao & Ruderman, 1999; Tenenbaum & Freeman, 2000). Bilinear generative models provide a tractable way of addressing this problem by factoring an image into object features and transformations using a bilinear function.…”
Section: Discussion (mentioning; confidence: 99%)
“…As Schweitzer notes [33], the algorithm is likely to get stuck in local minima, since it comes from a linearization and uses gradient descent methods. On the other hand, Rao [32] has proposed a neural network which can learn a translation-invariant code for natural images. Although he suggests updating the appearance basis, the experiments show only translation-invariant recognition, as proposed by Black and Jepson [4].…”
Section: Adding Motion Into the Subspace Formulation (mentioning; confidence: 99%)