2014
DOI: 10.1007/s10489-014-0562-9

Probabilistic neural network training procedure based on Q(0)-learning algorithm in medical data classification

Abstract: In this article, an iterative procedure is proposed for the training process of the probabilistic neural network (PNN). In each stage of this procedure, the Q(0)-learning algorithm is utilized for the adaptation of the PNN smoothing parameter (σ). Four classes of PNN models are considered in this study. In the case of the first, simplest model, the smoothing parameter takes the form of a scalar; for the second model, σ is a vector whose elements are computed with respect to the class index; the third considered mode…
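The simplest PNN model from the abstract — one scalar smoothing parameter shared by all kernels — can be sketched as a Gaussian Parzen-kernel classifier. The function name and the toy data below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma):
    """Classify point x with a PNN using a single scalar smoothing
    parameter sigma (the simplest of the four models)."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        # Gaussian Parzen-kernel density estimate of class c at x
        d2 = np.sum((Xc - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    # Decide for the class with the highest estimated density
    return max(scores, key=scores.get)

# Illustrative two-class data (not from the paper)
X_train = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
y_train = np.array([0, 0, 1, 1])
label = pnn_predict(X_train, y_train, np.array([0.1, 0.0]), sigma=0.5)
```

The second and third models in the paper refine this by letting σ vary per class or per feature; the sketch above keeps only the scalar case.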

Cited by 40 publications (14 citation statements)
References 41 publications
“…However, the network parameters of PNN (e.g., connection weights and pattern layer smoothing factors) largely determine the performance of the network, and selecting the most appropriate network parameters based on training data often optimizes the classification performance of PNN [18,19]. Manually adjusting the network parameters is not a good approach: the workload is tedious and it is also difficult to adjust the network parameters to the most suitable values.…”
Section: Introduction
confidence: 99%
“…where σ_pn² is the smoothing parameter, which has to be iteratively estimated on the basis of the classification performance of the PNN [103].…”
Section: Review of the Current SHM Methods
confidence: 99%
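The iterative estimation of σ described in the quoted passage can be illustrated with a toy Q(0)-learning loop over a discretized grid of σ values. The reward function below is a hypothetical stand-in for the PNN's classification accuracy (assumed to peak near σ = 1.0), and the grid, step sizes, and purely exploratory behavior policy are illustrative choices, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: discretize candidate sigma values into states;
# actions nudge sigma down, keep it, or nudge it up.
sigmas = np.linspace(0.2, 2.0, 10)
actions = np.array([-1, 0, +1])
Q = np.zeros((len(sigmas), len(actions)))
alpha, gamma = 0.1, 0.9

def reward(idx):
    # Stand-in for the PNN's classification accuracy when evaluated
    # with smoothing parameter sigmas[idx]; assumed peaked near 1.0.
    return 1.0 - abs(sigmas[idx] - 1.0)

s = 0
for _ in range(20000):
    a = rng.integers(len(actions))  # purely exploratory behavior policy
    s2 = int(np.clip(s + actions[a], 0, len(sigmas) - 1))
    r = reward(s2)
    # Q(0) one-step temporal-difference update (off-policy)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

best_sigma = sigmas[int(Q.max(axis=1).argmax())]
```

Because Q(0)-learning is off-policy, even this random behavior policy lets the learned Q-values identify the σ region with the best (stand-in) classification performance.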
“…Within the same layer, a PCA dimensionality reduction stage, VCA feature transformation stage, and L_{2,p}-RSR feature selection stage, where RSR is regularized self-representation, are included. Each layer of the model can learn the output features of the current layer [7]. The abstract information of the different layers is different.…”
Section: Characteristic Learning
confidence: 99%