2007
DOI: 10.1140/epjst/e2007-00061-7
Dynamical neural networks: Modeling low-level vision at short latencies

Abstract: Our goal is to understand the dynamics of neural computations in low-level vision. We study how the substrate of this system, that is local biochemical neural processes, could combine to give rise to an efficient and global perception. We will study these neural computations at different scales from the single-cell to the whole visual system to infer generic aspects of the underlying neural code which may help to understand this cognitive ability. In fact, the architecture of cortical areas, such as the Primar…

Cited by 5 publications (7 citation statements) · References 141 publications (147 reference statements)
“…This also suggests the extension to representations with some built-in invariances, such as translation and scaling. A Gaussian pyramid, for instance, provides a multiscale representation where the set of learned filters would become a dictionary of mother wavelets (Perrinet, 2007). Such an extension leads to a fundamental question: how does representation efficiency evolve with the number M of elements in the dictionary, that is, with the complexity of the representation?…”

Section: Discussion
confidence: 99%
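As a side note on the multiscale extension quoted above, the following is a minimal NumPy sketch of a Gaussian pyramid (an illustration, not taken from the paper): each level is a blurred, 2x-downsampled copy of the previous one, so a single "mother" filter replicated across levels yields a wavelet-like multiscale dictionary. The kernel, depth, and image size are all illustrative assumptions.

    import numpy as np

    # Minimal sketch of a Gaussian pyramid (illustrative, not from the paper):
    # each level is a blurred, 2x-downsampled copy of the previous one, so a
    # single "mother" filter applied at every level yields a multiscale,
    # wavelet-like dictionary. Kernel, depth and image size are assumptions.

    def gaussian_pyramid(image, levels=4):
        k = np.array([1., 4., 6., 4., 1.]) / 16.   # 5-tap binomial ~ Gaussian
        pyramid = [image]
        for _ in range(levels - 1):
            # Separable blur: filter the columns, then the rows.
            blurred = np.apply_along_axis(
                lambda r: np.convolve(r, k, mode='same'), 0, pyramid[-1])
            blurred = np.apply_along_axis(
                lambda r: np.convolve(r, k, mode='same'), 1, blurred)
            pyramid.append(blurred[::2, ::2])       # subsample by 2 per axis
        return pyramid

    levels = gaussian_pyramid(np.random.rand(64, 64))
    print([lvl.shape for lvl in levels])   # (64, 64), (32, 32), (16, 16), (8, 8)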
“…An effective decoding algorithm is to estimate the analog values of the sparse vector (and thus reconstruct the signal) from the order of the neurons' activation in the sparse vector (Perrinet, 2007). In fact, knowing the address of the fiber i_0 corresponding to the maximal value, we may infer that it has been produced by an analog value on the emitter side in the highest quantile of the probability distribution function of a_{i_0}.…”

Section: Role of Homeostasis in Representation Efficiency
confidence: 99%
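The rank-order decoding scheme quoted above can be made concrete with a toy sketch (an assumption-laden illustration, not the paper's implementation): the encoder transmits only the fiber addresses ordered by decreasing coefficient, and the decoder maps each rank back to the corresponding quantile of the coefficient distribution, which both sides are assumed to share.

    import numpy as np

    # Toy sketch of rank-order decoding (an illustration, not the paper's code):
    # the encoder transmits only the ADDRESSES of the fibers, ordered by
    # decreasing coefficient; the decoder assigns to rank k the corresponding
    # quantile of the coefficient distribution, assumed known to both sides.

    rng = np.random.default_rng(0)
    M = 64                                      # number of fibers (arbitrary)
    a = np.abs(rng.laplace(scale=1.0, size=M))  # sparse-ish analog coefficients

    order = np.argsort(-a)                      # encoder: addresses, largest first

    # Decoder: rank k -> (1 - k/M)-th quantile of the distribution of a.
    # In practice this look-up table would be learned from past activity.
    quantiles = np.quantile(a, 1.0 - (np.arange(M) + 0.5) / M)
    a_hat = np.empty(M)
    a_hat[order] = quantiles

    print("reconstruction correlation:", np.corrcoef(a, a_hat)[0, 1])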
“…In fact, we restricted ourselves here to static flashed images, but the approach is easily extendable to causal filters (see Ch. 3.4.1 in (Perrinet, 2007)). It raises, however, the still unsolved problem of a dynamical compromise between the precision and the rapidity of the code.…”

Section: Sparse Spike Coding
confidence: 96%
“…(Right) The resulting COMP solution gives a result similar to MP's in terms of residual energy as a function of pure L0 sparseness (see inset). In fact, since MP takes the maximum absolute coefficient, and since the decrease of energy is proportional to the square of the coefficient (see Chapter 3.1.2 of (Perrinet, 2007)), the decrease of MSE per coefficient is optimal for MP. Both are better for that purpose than conjugate gradient.…”

Section: Sparse Spike Coding
confidence: 99%
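The energy argument in this quote is easy to verify numerically. Below is a minimal Matching Pursuit sketch (a toy check, not the paper's code) showing that, with unit-norm atoms, each greedy step removes residual energy exactly equal to the square of the selected coefficient, which is why picking the maximum absolute coefficient is the locally optimal choice. The problem sizes are arbitrary assumptions.

    import numpy as np

    # Minimal Matching Pursuit sketch (a toy check, not the paper's code):
    # with unit-norm atoms, subtracting the projection on the selected atom
    # removes residual energy equal to the SQUARE of its coefficient, so the
    # greedy choice of the maximum absolute coefficient gives the steepest
    # decrease of MSE per coefficient. Sizes below are arbitrary assumptions.

    rng = np.random.default_rng(1)
    N, M = 32, 128                          # signal dimension, dictionary size
    D = rng.standard_normal((N, M))
    D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
    x = rng.standard_normal(N)

    residual = x.copy()
    for step in range(10):
        c = D.T @ residual                  # correlations with every atom
        i = np.argmax(np.abs(c))            # greedy: max absolute coefficient
        e_before = residual @ residual
        residual = residual - c[i] * D[:, i]
        e_drop = e_before - residual @ residual
        assert np.isclose(e_drop, c[i] ** 2)    # energy drop = coefficient**2
        print(f"step {step}: dropped {e_drop:.4f} of residual energy")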