2014 IEEE Symposium on Computational Intelligence and Data Mining (CIDM)
DOI: 10.1109/cidm.2014.7008147
Batch linear least squares-based learning algorithm for MLMVN with soft margins

Cited by 11 publications (19 citation statements). References 20 publications.

“…The second advantage is derivative-free learning, which does not suffer from the local minima phenomenon [29]. The third advantage is the ability of MLMVN to employ a batch learning algorithm [30,34], which adjusts the weights not sample by sample but for the entire learning set at once, after the errors have been calculated for all learning samples. Specifically, to improve generalization capability when solving classification problems, a soft margin technique was introduced for MLMVN in [28].…”
Section: Multilayer Neural Network with Multi-Valued Neurons (MLMVN), mentioning
confidence: 99%
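
To make the batch scheme described in the quote concrete, here is a minimal Python/NumPy sketch of a single-layer batch update: the errors of all learning samples are collected first, and one least-squares solve then yields the weight increments for the whole set. The names and the exact matrix bookkeeping are illustrative, not taken from [30] or [34].

```python
import numpy as np

# Minimal sketch of one batch update for a single layer of multi-valued
# neurons, assuming the errors have already been computed for ALL samples.
# X: (N_s, n+1) complex input matrix (leading column of ones for the bias),
# Delta: (N_s, n_neurons) matrix of accumulated per-sample errors,
# W: current (n+1, n_neurons) complex weight matrix.
def batch_lls_update(X, Delta, W):
    # Find the weight increments that best explain the errors of the whole
    # learning set at once (in the least-squares sense), instead of
    # correcting after each individual sample.
    dW, *_ = np.linalg.lstsq(X, Delta, rcond=None)
    return W + dW
```
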
“…Then, some modifications were made to this algorithm, and it was generalized to any number of output neurons and hidden layers. Moreover, the soft margins introduced for MLMVN in [28] were incorporated into the batch algorithm [34]. We used this algorithm exactly as it was presented in [34].…”
Section: Multilayer Neural Network with Multi-Valued Neurons (MLMVN), mentioning
confidence: 99%
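
As a rough illustration of the soft-margin idea mentioned in the quote, the hedged sketch below zeroes the error of any sample whose weighted sum already falls into its desired sector, so the network is not over-corrected on samples it already classifies correctly. The exact rule of [28]/[34] may differ in its treatment of margins and targets.

```python
import numpy as np

# Hedged sketch of a soft-margin-style error rule for a discrete MVN output
# neuron whose unit circle is split into k equal sectors. This is only one
# plausible reading of the soft-margin idea, not the exact rule from [28].
def soft_margin_error(z, desired_sector, k):
    sector = 2 * np.pi / k
    phase = np.angle(z) % (2 * np.pi)          # arg(z) mapped into [0, 2*pi)
    if int(phase // sector) == desired_sector:
        return 0j                              # already correct: no correction
    # otherwise aim at the bisector of the desired sector
    target = np.exp(1j * sector * (desired_sector + 0.5))
    return target - z / abs(z)                 # error taken on the unit circle
```
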
“…The intelligent classifier used is a complex-valued neural network that shows excellent performance compared to other machine-learning techniques. It is based on a feedforward multilayer neural network with multi-valued neurons (MLMVN), characterized by a derivative-free learning algorithm [25], an alternative algorithm based on the linear least squares (LLS) method [26] that reduces the high computational cost of the original backpropagation procedure, and a soft margin method [27,28] that makes the network a good classifier. Further details will be given later in the application description.…”
Section: Neural Network Training, mentioning
confidence: 99%
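
The quoted classifier is built from discrete multi-valued neurons, whose activation maps the weighted sum onto one of k sectors of the complex unit circle. A minimal sketch of this standard activation:

```python
import numpy as np

# Discrete activation of a multi-valued neuron (MVN): the complex plane is
# divided into k equal sectors, and the weighted sum z is mapped to the
# k-th root of unity that opens the sector z falls into.
def mvn_activation(z, k):
    phase = np.angle(z) % (2 * np.pi)          # arg(z) in [0, 2*pi)
    j = int(k * phase // (2 * np.pi))          # sector index 0 .. k-1
    return np.exp(2j * np.pi * j / k)
```
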
“…This learning rule allows the correction of the weights for each sample $s$ of the dataset ($s = 1, \dots, N_s$). As shown in [37,38], the correction of the weights can be obtained through a derivative-free learning rule, and this is one of the most important advantages of using a complex neural network over other classifiers. This procedure can be applied step by step for each layer and each sample, or through an algorithm based on the linear least squares (LLS) method, which reduces the computational cost [39].…”
Section: Complex Neural Network, mentioning
confidence: 99%
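
The following hedged sketch shows one way such a derivative-free pass could propagate errors from an output layer back to a hidden layer. The reciprocal-weight sharing and the uniform normalization are assumptions patterned on MLMVN practice; see [37,38] for the authoritative rule.

```python
import numpy as np

# Hedged sketch of derivative-free error sharing from an output layer back
# to the preceding hidden layer, assuming errors propagate through the
# reciprocals of the connecting weights and are shared uniformly. The exact
# normalization factor is an assumption here.
def backprop_hidden_errors(delta_out, W_out):
    # delta_out: (n_out,) complex output-neuron errors for one sample
    # W_out: (n_out, n_hidden + 1) complex weights, column 0 = bias weight
    n_hidden = W_out.shape[1] - 1
    return (delta_out @ (1.0 / W_out[:, 1:])) / (n_hidden + 1)
```
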
“…where $\Delta W_i^{k,m}$ is the correction for the $i$-th weight of the $k$-th neuron belonging to layer $m$, $\alpha_{k,m}$ is the corresponding learning rate, $n_{m-1}$ is the number of inputs, equal to the number of outputs of the previous layer, $|z_{k,m}^s|$ is the magnitude of the weighted sum, $\delta_{k,m}^s$ is the output error obtained through the backpropagation method, and $\bar{Y}_{i,m-1}^s$ is the conjugate transpose of the input. In this way, it is possible to organize a very efficient batch learning algorithm based on the LLS method [37]. When using this algorithm, the output error is calculated for each neuron and each sample and saved in a dedicated matrix at the end of every training epoch.…”
Section: Complex Neural Network, mentioning
confidence: 99%
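
A direct, hedged transcription of the quoted correction into code (symbol names kept; treating $\bar{Y}$ as the elementwise complex conjugate of the input vector for one sample is an assumption):

```python
import numpy as np

# Per-sample correction for one neuron k in layer m, following the quoted
# formula: Delta_W = alpha_{k,m} / ((n_{m-1} + 1) * |z|) * delta * conj(Y).
def weight_correction(alpha, z, delta, Y_prev):
    # Y_prev: complex input vector including the constant 1 for the bias
    n_prev = Y_prev.size - 1          # n_{m-1}: inputs, excluding the 1
    return alpha / ((n_prev + 1) * abs(z)) * delta * np.conj(Y_prev)
```

Collecting these per-sample errors in a matrix, as the passage describes, lets one training epoch end with a single LLS solve per layer rather than $N_s$ individual corrections.
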