2018
DOI: 10.1007/978-3-030-00563-4_2
How Good a Shallow Neural Network Is for Solving Non-linear Decision Making Problems

Abstract: The universal approximation theorem states that a shallow neural network (one hidden layer) can represent any non-linear function. In this paper, we examine how good a shallow neural network is for solving non-linear decision making problems. We propose a performance-driven incremental approach to searching for the best shallow neural network for decision making, given a data set. The experimental results on two benchmark data sets, Breast Cancer in Wisconsin and SMS Spams, demonstrate the correction of un…

Year Published: 2020, 2021

Cited by 2 publications (2 citation statements) · References 17 publications
“…As in the experiments in [3], the number of hidden neurons of the SNN is increased from 1 to n, where n is the number of inputs of the problem space.…”

Section: Learnability Experiments, 5.3.1 Impact of Hidden Neuron Numbers

Confidence: 99%
“…Hence, a performance-driven backpropagation (PDBP) algorithm and a variant of particle swarm optimization (VPSO) are developed to learn and optimize, respectively, the weights of an SNN for non-linear decision making. Our previous research [3] observed the impact of the number of hidden neurons on the performance of SNNs across different data sets, using an incremental approach that adds hidden neurons to an SNN one at a time. It showed that once the number of hidden neurons in an SNN for general data sets reaches about half or more of the number of inputs in the problem space, performance stops improving or changes only marginally.…”

Section: Introduction

Confidence: 99%
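The incremental search described in these citation statements can be sketched as follows. The Breast Cancer Wisconsin data set is one of the paper's two benchmarks, but the train/test split, improvement threshold, scaling step, and training settings below are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch: grow a shallow neural network (SNN, one hidden layer) from
# 1 to n hidden neurons, where n is the number of inputs, keeping the smallest
# network whose held-out accuracy still improves meaningfully.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # 30 input features, so n = 30
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

n = X.shape[1]
best_acc, best_size, history = 0.0, 0, []
for hidden in range(1, n + 1):
    # One hidden layer of `hidden` neurons, incremented one at a time.
    snn = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=1000,
                      random_state=0),
    )
    snn.fit(X_tr, y_tr)
    acc = snn.score(X_te, y_te)
    history.append(acc)
    if acc > best_acc + 1e-3:  # assumed improvement threshold
        best_acc, best_size = acc, hidden

print(f"best SNN: {best_size} hidden neurons, test accuracy {best_acc:.3f}")
```

Consistent with the observation quoted above, the recorded accuracies would be expected to plateau well before all n hidden neurons are used, often around n/2.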