1990
DOI: 10.1109/72.80209

Self-organizing network for optimum supervised learning

Abstract: A new algorithm called the self-organizing neural network (SONN) is introduced, and its use is demonstrated on a system identification task. The algorithm constructs the network, chooses the node functions, and adjusts the weights. It is compared with the backpropagation algorithm on the identification of a chaotic time series; the results show that SONN constructs a simpler, more accurate model while requiring less training data and fewer training epochs. The algorithm can also be applied as a classifier.
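
The abstract gives only the outline of the procedure. As a rough, hedged illustration of what "constructs a network, chooses the node functions, and adjusts the weights" under a score can look like, here is a minimal Python sketch; the candidate node functions, the least-squares weight fit, and the mdl_score form are simplifying assumptions for illustration, not the published SONN algorithm.

```python
# Illustrative sketch only -- NOT the published SONN procedure. It mimics
# the abstract's outline (grow a structure, choose node functions, adjust
# weights) under simplifying assumptions: a fixed candidate pool, linear
# least-squares weight fitting, and an assumed MDL-style score.
import numpy as np

def mdl_score(residuals, n_params):
    """Fit term plus an assumed (k/2) log N complexity penalty."""
    n = len(residuals)
    fit = 0.5 * n * np.log(np.mean(residuals ** 2) + 1e-12)
    return fit + 0.5 * n_params * np.log(n)

def fit_weights(features, y):
    """Adjust the weights of the current structure by least squares."""
    w, *_ = np.linalg.lstsq(features, y, rcond=None)
    return w

def grow_model(x, y, max_nodes=5):
    """Greedily add one node function at a time while the score improves."""
    candidates = [np.tanh, np.sin, lambda v: v ** 2, lambda v: v ** 3]
    columns = [np.ones_like(x)]                  # bias-only starting model
    w = fit_weights(np.column_stack(columns), y)
    best = (mdl_score(y - np.column_stack(columns) @ w, 1), None)
    for _ in range(max_nodes):
        improved = False
        for f in candidates:
            trial = np.column_stack(columns + [f(x)])
            w = fit_weights(trial, y)
            score = mdl_score(y - trial @ w, trial.shape[1])
            if score < best[0]:
                best, improved = (score, f), True
        if not improved:                         # no node lowers the score
            break
        columns.append(best[1](x))               # commit the winning node
    features = np.column_stack(columns)
    return features, fit_weights(features, y)

# Toy identification task: recover a smooth map from noisy samples.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = np.sin(3.0 * x) + 0.05 * rng.normal(size=x.size)
features, weights = grow_model(x, y)
print("nodes:", features.shape[1], "mse:", np.mean((y - features @ weights) ** 2))
```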

Cited by 86 publications (27 citation statements)
References 21 publications

“…Since these specification terms also depend on the particular coding scheme (or the prior bias in Bayesian terminology), we choose to ignore these terms in the optimization or search and propose that the complexity be simply proportional to the (3/2) log N precision terms. This penalty scheme has been widely used in practice by other authors (Rissanen and Wax 1988; Tenorio and Lee 1990; et al.). Hence, for rule set R the complexity is assessed as […]. As we shall discuss later in the section on empirical results, this simple pruning algorithm is very effective at discovering parsimonious rule sets that account for the data.…”
Section: Rule Pruning Using a Minimum Description Length Model
confidence: 99%
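
The formula itself was lost in extraction from the quote above ("the complexity is assessed as […]"). On the reading the quote suggests, with each numeric parameter charged (3/2) log N precision bits, the total description length of a rule set R with k(R) free parameters fit to N examples would take roughly the form below; this is a hedged reconstruction, not the cited paper's exact equation.

```latex
% Hedged reconstruction, not the cited paper's exact equation:
% a fit term (code length of the data D under rule set R) plus a
% complexity term of (3/2) log2 N bits per free numeric parameter.
\[
  \mathrm{DL}(R) \;=\; -\log_2 P(D \mid R) \;+\; \tfrac{3}{2}\, k(R) \log_2 N
\]
```
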
“…Several constructive and growing algorithms have been proposed to address this issue, since they are capable of searching the nondifferentiable landscape of subsets of polynomial features. Some add units by multiplying extant ones with new inputs [13], and many others focus on the construction of multilayer FLNs [14]. A recent constructive method [15] performs a Boolean approximation of the data to select the relevant polynomials, followed by a final pruning phase.…”
Section: EFLN in Context
confidence: 99%
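
As one concrete reading of "add units by multiplying extant ones with new inputs," the sketch below grows a polynomial feature set by forming products of existing units with raw inputs, keeping a product only when held-out error drops. The least-squares fit and the acceptance test are assumptions for illustration, not the method of the paper cited as [13] above.

```python
# Minimal sketch of constructive polynomial-feature growth: each candidate
# unit is the product of an existing unit with a raw input, kept only if
# validation error drops. Illustrative assumptions throughout.
import numpy as np

def val_mse(F_tr, y_tr, F_va, y_va):
    """Fit weights on the training split, score on the validation split."""
    w, *_ = np.linalg.lstsq(F_tr, y_tr, rcond=None)
    return np.mean((y_va - F_va @ w) ** 2)

def grow_features(X_tr, y_tr, X_va, y_va, rounds=3):
    # Start from a bias unit plus the raw inputs.
    units_tr = [np.ones(len(X_tr))] + [X_tr[:, j] for j in range(X_tr.shape[1])]
    units_va = [np.ones(len(X_va))] + [X_va[:, j] for j in range(X_va.shape[1])]
    best = val_mse(np.column_stack(units_tr), y_tr,
                   np.column_stack(units_va), y_va)
    for _ in range(rounds):
        gain = None
        for i in range(len(units_tr)):          # extant unit
            for j in range(X_tr.shape[1]):      # new raw input
                cand_tr = units_tr + [units_tr[i] * X_tr[:, j]]
                cand_va = units_va + [units_va[i] * X_va[:, j]]
                err = val_mse(np.column_stack(cand_tr), y_tr,
                              np.column_stack(cand_va), y_va)
                if err < best and (gain is None or err < gain[0]):
                    gain = (err, i, j)
        if gain is None:                         # no product helps; stop
            break
        best, i, j = gain
        units_tr.append(units_tr[i] * X_tr[:, j])
        units_va.append(units_va[i] * X_va[:, j])
    return units_tr, best

# Usage on a toy target containing an interaction term.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=300)
units, err = grow_features(X[:200], y[:200], X[200:], y[200:])
print("units:", len(units), "val mse:", round(err, 4))
```
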
“…Sometimes, "pruning" is also performed on nodes in the input and output layers in order to determine the most important set of variables in the representation scheme. The former approach is represented in the Dynamic Node Creation scheme [18], the Cascade Correlation Learning Architecture [13], and the Self-Organising Neural Network [14], and the latter in Skeletonization [19] and Karnin's pruning scheme [20].…”
Section: Higher Order Learning and Adaptive Architecture Determination
confidence: 99%