1994
DOI: 10.21236/ada289352
Reducing Network Depth in the Cascade-Correlation Learning Architecture

Abstract: The Cascade-Correlation learning algorithm constructs a multi-layer artificial neural network as it learns to perform a given task. The resulting network's size and topology are chosen specifically for this task. In the resulting "cascade" networks, each new hidden unit receives incoming connections from all input and pre-existing hidden units. In effect, each new unit adds a new layer to the network. This allows Cascade-Correlation to create complex feature detectors, but it typically results in a network tha…

Cited by 80 publications (75 citation statements)
References 5 publications
“…In cascade correlation neural networks, training starts with a simple perceptron network, which is grown incrementally by adding new cascaded layers with skip-level connections as long as performance on a validation dataset improves. Since the proposal of the original cascade correlation algorithm in [7], various improvements that follow a similar overall process to the original method have been proposed, for example in [1,11,20,31], as well as Cascade2 [21]. Active research in this field, however, is fairly limited.…”
Section: AutoML
confidence: 99%
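The growth process described above can be sketched in a few lines of NumPy. This is a minimal illustration of the cascade topology and growth loop, not the authors' implementation: the XOR task, the cap of three hidden units, and the use of a least-squares output fit plus randomly sampled candidates (in place of gradient-trained candidate pools) are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: XOR, a task a bare perceptron cannot fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def activations(X, hidden_ws):
    """Cascade topology: columns are the inputs, a bias, then one tanh
    unit per weight vector; unit i sees every column added before it
    (skip-level connections from all inputs and earlier hidden units)."""
    A = np.hstack([X, np.ones((len(X), 1))])
    for w in hidden_ws:
        A = np.hstack([A, np.tanh(A @ w)[:, None]])
    return A

hidden_ws = []
for _ in range(3):  # grow at most three hidden units
    A = activations(X, hidden_ws)
    out_w, *_ = np.linalg.lstsq(A, y, rcond=None)  # output phase
    err = y - A @ out_w
    if np.abs(err).max() < 1e-3:
        break
    # Input phase, heavily simplified: rather than gradient-training a
    # candidate pool, keep the random candidate whose activation
    # correlates best with the current residual error.
    cands = rng.normal(size=(50, A.shape[1]))
    score = lambda w: abs(np.corrcoef(np.tanh(A @ w), err)[0, 1])
    hidden_ws.append(max(cands, key=score))

# Final output-phase fit on the grown network.
A = activations(X, hidden_ws)
out_w, *_ = np.linalg.lstsq(A, y, rcond=None)
sse = float(np.sum((y - A @ out_w) ** 2))
```

Because every new unit becomes an extra input column for the output fit, the training residual can only shrink as the network grows; the validation check mentioned in the quoted passage is what stops that growth before it overfits.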
“…Instead of BP, we use a variant of the cascade correlation (CC) method called sibling-descendant cascade correlation (SDCC), which is a constructive method for learning in multi-layer artificial neural networks (Baluja & Fahlman, 1994). SDCC learns both the network's structure and the connection weights; it starts with a minimal network, then automatically trains new hidden units and adds them to the active network, one at a time.…”
Section: Autonomous Learning Via a Constructive Algorithm
confidence: 99%
“…In classical CC, each new recruit is installed on its own layer, higher than previous layers. The SDCC variant is more flexible: a recruit can be installed either on the current highest layer (as a sibling) or on its own higher layer (as a descendant), depending on which location yields the higher correlation between candidate unit activation and current network error (Baluja & Fahlman, 1994). In both CC and SDCC, learning progresses in a recurring sequence of two phases: an output phase and an input phase.…”
Section: Autonomous Learning Via a Constructive Algorithm
confidence: 99%
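The sibling-versus-descendant decision quoted above can be made concrete. The sketch below assumes random stand-in activations (the array names `A_below`, `A_top` and all sizes are illustrative, not from the paper): a sibling candidate sees only units below the current top layer, a descendant candidate additionally sees the top layer itself, and SDCC keeps whichever candidate scores higher on the cascade-correlation criterion.

```python
import numpy as np

rng = np.random.default_rng(1)

def corr_score(v, err):
    """Cascade-correlation criterion: |covariance| between a candidate
    unit's activation and the network's residual error."""
    return abs(np.sum((v - v.mean()) * (err - err.mean())))

# Hypothetical setup: `A_below` holds the activations a sibling
# candidate would receive (inputs and all layers below the current top
# layer); a descendant candidate additionally receives the current top
# layer's units, `A_top`.
n = 16
err = rng.normal(size=n)            # stand-in for current network error
A_below = rng.normal(size=(n, 4))
A_top = rng.normal(size=(n, 2))
A_all = np.hstack([A_below, A_top])

sibling = np.tanh(A_below @ rng.normal(size=4))
descendant = np.tanh(A_all @ rng.normal(size=6))

# Install the candidate that correlates better with the error: on the
# current top layer (sibling) or on a new, higher layer (descendant).
placement = ("descendant"
             if corr_score(descendant, err) > corr_score(sibling, err)
             else "sibling")
```

Only when the descendant wins does the network gain a layer, which is how SDCC tends to produce shallower networks than classical CC.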
“…It is possible that the random functions generated by our teacher networks did not have this characteristic. Baluja and Fahlman (1994) proposed an elegant solution to the problem of having to choose whether or not to cascade weights. In their extension to cascor, called Sibling/Descendant…”
Section: Why Use Cascading Weights?
confidence: 99%