2020
DOI: 10.1080/00207543.2020.1764656
Cascade neural network algorithm with analytical connection weights determination for modelling operations and energy applications

Abstract: The performance and learning speed of the Cascade Correlation neural network (CasCor) may not be optimal because of redundant hidden units in the cascade architecture and the tuning of connection weights. This study explores the limitations of CasCor and its variants and proposes a novel constructive neural network (CNN). The basic idea is to compute the input connection weights by generating linearly independent hidden units from the orthogonal linear transformation, and the output connection weights by conn…
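The abstract's core idea can be sketched in a few lines: derive input connection weights from an orthogonal linear transformation (here, PCA eigenvectors of the input covariance), so the hidden units are linearly independent, and then solve the output connection weights analytically by least squares rather than iterative tuning. This is a minimal illustration under our own assumptions — the function names, the `tanh` activation, and the direct input-to-output connections are ours, not necessarily the paper's exact formulation.

```python
import numpy as np

def fit_cascade_pcls(X, y, n_hidden=2):
    """Sketch: PCA-derived input weights + least-squares output weights.

    Returns (W_in, w_out). Assumes X is (n_samples, n_features)
    and y is (n_samples,).
    """
    Xc = X - X.mean(axis=0)
    # Orthogonal linear transformation: eigenvectors of the covariance
    # matrix give linearly independent directions for the hidden units.
    cov = Xc.T @ Xc / (len(X) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    W_in = eigvecs[:, np.argsort(eigvals)[::-1][:n_hidden]]  # top components
    H = np.tanh(Xc @ W_in)               # hidden-unit activations
    # Cascade-style design matrix: hidden units, direct input connections,
    # and a bias column.
    A = np.hstack([H, Xc, np.ones((len(X), 1))])
    # Output connection weights determined analytically (least squares),
    # with no iterative tuning.
    w_out, *_ = np.linalg.lstsq(A, y, rcond=None)
    return W_in, w_out

def predict_cascade_pcls(X, train_mean, W_in, w_out):
    Xc = X - train_mean
    A = np.hstack([np.tanh(Xc @ W_in), Xc, np.ones((len(X), 1))])
    return A @ w_out
```

Because the output weights come from a single linear solve, training is a closed-form computation; only the number of hidden units remains as a structural choice.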

Cited by 11 publications (2 citation statements). References 40 publications.
“…This is probably because the CMLP network contains a direct weighted connection from the input layer to the output layer, which allows it to learn more complex patterns than the MLP network. 76,77 These additional direct connections also provide linear pathways, so the CMLP network can represent both linear and nonlinear relationships.…”
Section: Number of Clusters
confidence: 99%
“…Extensive hyperparameter initialization in machine learning methods may require a great deal of trial-and-error experimentation to determine the optimal structure capable of maximum error reduction. Excessive hyperparameter adjustment, together with high-dimensional data holding redundant information, may cause existing algorithms to converge to a suboptimal solution (Wang et al, 2020). In this study, we propose a CNN, named the hyperparameter-free cascade principal component least squares neural network (hyp-free CPCLS), which analytically determines the number of hidden units in the hidden layers with no iterative tuning of connection weights.…”
Section: Contribution and Novelty
confidence: 99%
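The citing work describes determining the number of hidden units analytically rather than by trial and error. One common way to do this with principal components — offered here as a hedged illustration, not the paper's exact rule — is to keep as many components as needed to explain a fixed fraction of the input variance; the 95% threshold and the function name below are our assumptions.

```python
import numpy as np

def n_hidden_from_variance(X, threshold=0.95):
    """Choose the number of hidden units as the number of principal
    components needed to explain `threshold` of the input variance.
    No iterative width tuning is required."""
    Xc = X - X.mean(axis=0)
    # Eigenvalues of the covariance matrix, sorted descending.
    eigvals = np.linalg.eigvalsh(Xc.T @ Xc / (len(X) - 1))[::-1]
    explained = np.cumsum(eigvals) / eigvals.sum()
    # First index where the cumulative explained variance crosses the
    # threshold (1-based count of components).
    return int(np.searchsorted(explained, threshold) + 1)
```

This turns the network width into a deterministic function of the data, which is the sense in which such a design can be called hyperparameter-free aside from the variance threshold itself.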