2001
DOI: 10.1142/s1469026801000081

Feedback Self-Organizing Map and Its Application to Spatio-Temporal Pattern Classification

Abstract: In this paper, a feedback self-organizing map (FSOM), an extension of the self-organizing map (SOM) obtained by adding feedback loops, is proposed. The SOM consists of an input layer and a competitive layer, and input vectors applied to the input layer are mapped onto the competitive layer while preserving their spatial features. To embed temporal information in the SOM, feedback loops from the competitive layer to the input layer are employed. The winner unit in the competitive layer is not assigned b…
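The feedback mechanism described in the abstract can be illustrated with a minimal, hypothetical sketch: the previous competitive-layer activity is fed back and appended to the next input vector before winner selection, so temporal context influences the mapping. All names, the one-hot feedback encoding, and the mixing weight `alpha` below are illustrative assumptions, not the paper's actual formulation (which is truncated above).

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class FeedbackSOM:
    """Toy SOM whose input is augmented with fed-back competitive activity."""

    def __init__(self, weights, alpha=0.5):
        # weights: one prototype per competitive unit; each prototype covers
        # the external-input part plus a feedback slot per unit
        self.weights = weights
        self.alpha = alpha                      # strength of the feedback signal
        self.prev_activity = [0.0] * len(weights)

    def step(self, x):
        # Augment the external input with the previous competitive activity
        augmented = list(x) + [self.alpha * a for a in self.prev_activity]
        dists = [euclidean(augmented, w) for w in self.weights]
        winner = dists.index(min(dists))
        # One-hot winner activity becomes the feedback for the next frame
        self.prev_activity = [1.0 if i == winner else 0.0
                              for i in range(len(self.weights))]
        return winner

# Two competitive units over 2-D inputs; each prototype also stores a
# 2-D feedback slot (one entry per competitive unit).
som = FeedbackSOM(weights=[[0.0, 0.0, 0.0, 0.0],
                           [1.0, 1.0, 0.0, 0.0]])
print(som.step([0.1, 0.0]))  # → 0 (closer to the first prototype)
```

Because the winner's identity is fed back into the next distance computation, identical inputs can map to different winners depending on what preceded them, which is the essence of embedding temporal information into the map.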

Cited by 29 publications (14 citation statements); references 9 publications.
“…And, in order to represent secondary candidates and a quantitative measure of confidence of applied input signals, another output layer is provided next to the competitive layer. Each of these structures has been proposed independently as a feedback SOM (FSOM) [4] and an MCP network [10], so we construct a new type of network taking into account their advantages. The proposed model consists of four layers, an input layer x, a competitive layer y, a state layer h, and an output layer z.…”
Section: An Elman-type Feedback SOM
confidence: 99%
“…Another method is to replace the conventional neurons in the standard SOM with dynamic elements [2,3]. Making use of feedback pathways that return the output signal to the input nodes as part of the net input signal at the next time frame [4] is also effective. A survey of these approaches shows that, for SOMs, which do not possess dynamics intrinsically, this line of work parallels the development from multilayer neural networks to recurrent networks, including time-delay neural networks (TDNNs) [5], finite impulse response (FIR) networks [6], Kleinfeld's model [7], Jordan's model [8], and Elman's model [9].…”
Section: Introduction
confidence: 99%
“…Although it was originally proposed as a neural network model of the biological visual information processing system, the topology-preserving projection developed through training, from the input layer (the multi-dimensional space) to the competitive layer (the two-dimensional plane), is quite attractive. From the viewpoint of fundamental studies, several new architectures have been proposed (Yamakawa and Horio 1999; Horio and Yamakawa 2001; Tokunaga et al. 2003), and they have been applied to various practical tasks: associative memory, systems control, temporal signal processing, pattern generation, and so on (Sakurai et al. 2002; Sudo et al. 2007; Ishiguma and Wakuya 2007; Wakuya et al. 2012; Yamaguchi 2013; Miyakoda and Inoue 2014).…”
Section: Introduction
confidence: 99%
“…Modified versions of the SOM that have enjoyed a great deal of interest equip it with additional feedback connections that allow for natural processing of recursive data types. Typical examples of such models are the Temporal Kohonen Map [5], the recurrent SOM [6], the feedback SOM [7], the recursive SOM [1], the merge SOM [8], and the SOM for structured data [9]. However, the representational capabilities and internal representations of these models are not well understood [3,10,11].…”
Section: Introduction
confidence: 99%
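As a point of comparison for the feedback-based models listed in the statement above, here is a minimal sketch of the leaky-integrator idea commonly attributed to the Temporal Kohonen Map: each unit's activation decays by a constant `d` and is decremented by half the squared distance to the current input, so the winner reflects recent input history rather than just the current frame. The decay constant, the vectors, and all names are illustrative assumptions.

```python
def tkm_step(activities, weights, x, d=0.7):
    """One leaky-integration step: decay old activity, subtract half the
    squared distance to the current input, and pick the maximally
    activated unit as the winner."""
    new_acts = []
    for a, w in zip(activities, weights):
        sq_dist = sum((xi - wi) ** 2 for xi, wi in zip(x, w))
        new_acts.append(d * a - 0.5 * sq_dist)
    winner = new_acts.index(max(new_acts))
    return winner, new_acts

acts = [0.0, 0.0]
weights = [[0.0, 0.0], [1.0, 1.0]]
winner, acts = tkm_step(acts, weights, [0.2, 0.1])
print(winner)  # → 0 (unit 0 is closer to the input)
```

Unlike the feedback SOM, which routes competitive-layer output back to the input layer, this scheme keeps the temporal context inside each unit's activation, which is one reason the internal representations of these model families differ and, as the quoted statement notes, remain incompletely understood.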