2016
DOI: 10.1109/tnnls.2015.2431251

MLPNN Training via a Multiobjective Optimization of Training Error and Stochastic Sensitivity

Abstract: The training of a multilayer perceptron neural network (MLPNN) concerns the selection of its architecture and the connection weights via the minimization of both the training error and a penalty term. Different penalty terms have been proposed to control the smoothness of the MLPNN for better generalization capability. However, controlling its smoothness using, for instance, the norm of weights or the Vapnik-Chervonenkis dimension cannot distinguish individual MLPNNs with the same number of free parameters or …
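The abstract describes minimizing two objectives at once: the training error and a stochastic-sensitivity term that measures output fluctuation under bounded input perturbations. A minimal sketch of that idea, using a weighted-sum scalarization rather than the paper's actual multiobjective optimizer (the function names, the perturbation radius `q`, and the trade-off weight `lam` are illustrative assumptions, not the paper's notation):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    """A single-hidden-layer MLP with tanh activation (illustrative only)."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

def stochastic_sensitivity(x, params, q=0.1, n_perturb=200):
    """Monte Carlo estimate of the expected squared output fluctuation
    under perturbations drawn uniformly from the Q-neighborhood of x."""
    y0 = mlp_forward(x, *params)
    total = 0.0
    for _ in range(n_perturb):
        delta = rng.uniform(-q, q, size=x.shape)
        total += np.mean((mlp_forward(x + delta, *params) - y0) ** 2)
    return total / n_perturb

def scalarized_objective(x, y, params, lam=0.5):
    """Weighted-sum scalarization of the two objectives:
    training MSE plus lam times the stochastic-sensitivity term."""
    mse = np.mean((mlp_forward(x, *params) - y) ** 2)
    return mse + lam * stochastic_sensitivity(x, params)
```

The weighted sum is a simplification for illustration; a true multiobjective method would instead search for Pareto-optimal trade-offs between the two terms.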

Cited by 64 publications (28 citation statements)
References 37 publications
“…Optimizing an MLP for learning is based on selecting a suitable architecture and the connection weights via minimizing the training error and a penalty term [13]. Many techniques have been proposed to extend MLP abilities with additional components.…”
Section: Related Work
confidence: 99%
“…Therefore, we use a training method based on LGE to train the AE, referred to as LiSSA [45]. The goal of the LGE-based training method is to achieve a low generalization error for future unseen samples [46]. For machine-learning tasks, those unseen samples are usually near the training samples and do not exceed a distance Q, otherwise this training sample is not representative of the given problem.…”
Section: Localized Stochastic-Sensitive Autoencoder (LiSSA)
confidence: 99%
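The Q-neighborhood notion quoted above, where unseen samples are assumed to lie within a distance Q of the training samples, can be sketched as follows (the function name, the L-infinity ball, and all parameter values are assumptions made for illustration, not details from the cited papers):

```python
import numpy as np

rng = np.random.default_rng(1)

def q_neighborhood_samples(x_train, q=0.2, n_per_sample=50):
    """Model hypothetical 'unseen' samples as training samples perturbed
    within an L-infinity ball of radius Q around each training point."""
    noise = rng.uniform(-q, q, size=(n_per_sample,) + x_train.shape)
    return x_train[None, :, :] + noise

x = rng.normal(size=(10, 3))
unseen = q_neighborhood_samples(x, q=0.2)
# every generated sample stays within distance Q of its source sample
assert np.all(np.abs(unseen - x[None]) <= 0.2)
```

Samples generated this way are what the stochastic-sensitivity term averages over: a network whose outputs vary little across each Q-neighborhood is expected to generalize better to nearby unseen inputs.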
“…In [41], another Q-neighborhood-based sensitivity was introduced to assess sensitivity for individual instances of the imbalanced classification problem. Recently, a stochastic sensitivity [42] based on L-GEM was introduced to provide a straightforward measure of an MLP's output fluctuations.…”
Section: A. The Stochastic Sensitivity Analysis of Neural Network
confidence: 99%