2008 7th IEEE International Conference on Development and Learning
DOI: 10.1109/devlrn.2008.4640806
Where-what network 1: “Where” and “what” assist each other through top-down connections

Abstract: This paper describes the design of a single learning network that integrates both object location ("where") and object type ("what") from images of learned objects in natural complex backgrounds. The in-place learning algorithm is used to develop the internal representation (including the synaptic bottom-up and top-down weights of every neuron) in the network, such that every neuron is responsible for the learning of its own signal processing characteristics within its connected network environment, through intera…
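The abstract's notion of neurons that carry both bottom-up and top-down synaptic weights, updated locally ("in-place") rather than by a global error signal, can be illustrated with a minimal sketch. All function names and the update rule below are hypothetical simplifications; the paper's actual in-place learning algorithm is not reproduced here.

```python
import numpy as np

def neuron_response(x_bottom_up, z_top_down, wb, wt):
    """Pre-response of one neuron: inner products of the normalized
    bottom-up and top-down inputs with their weight vectors.
    (Illustrative only; not the paper's exact formulation.)"""
    x = x_bottom_up / (np.linalg.norm(x_bottom_up) + 1e-12)
    z = z_top_down / (np.linalg.norm(z_top_down) + 1e-12)
    return float(wb @ x + wt @ z)

def in_place_update(w, inp, lr=0.1):
    """Hebbian-like in-place update: move a winning neuron's weight
    vector toward its current input, using only locally available
    signals (no backpropagated error)."""
    return (1 - lr) * w + lr * inp
```

The key property sketched here is locality: each neuron adjusts its own weights from its own inputs, which is what lets "where" and "what" signals assist each other through the top-down connections.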

Cited by 35 publications (24 citation statements)
References 13 publications
“…The DN has had several versions of experimental embodiments, called Where-What Networks (WWNs), from WWN-1 [14] to WWN-7 [43]. Each WWN has multiple areas in the Z areas, representing the location concept (Location Motor, LM), type concept (Type Motor, TM), or scale concept (Scale Motor, SM), and so on.…”
Section: Results
Confidence: 99%
“…In addition to the similarity to Hopfield Network [13] and LSTM [11] cited therein, Graves et al 2014 [9] appeared to have a series of mechanism similarities with DN 2011 [30] along with DN embodiments Where-What Networks, WWN-1 2008 [14] to WWN-7 2013 [43]. To facilitate understanding their conceptual relation, let us see some of the similarities with correspondence of concepts: (1) The finite state machine (i.e.…”
Section: Relevant Studies and Concepts
Confidence: 99%
“…Where-What Networks (WWNs) [8] first introduced in 2008 is a biologically plausible developmental model designed to integrate the object recognition and attention namely, "what" and "where" information in the ventral stream and dorsal stream respectively. Therefore, multiple concepts (e.g., type, location, scale) can be learned concurrently in the network.…”
Section: Introduction
Confidence: 99%
“…So far, WWN has six versions. WWN-1 [11] can realize object recognition in complex backgrounds performing in two different selective attention modes: the top-down position-based mode finds a particular object given the location information; the top-down object-based mode finds the location of the object given the type. But only 5 locations were tested.…”
Section: Introduction
Confidence: 99%