2014
DOI: 10.1007/s00521-014-1549-5
Applying a new localized generalization error model to design neural networks trained with extreme learning machine

Cited by 43 publications (8 citation statements); references 13 publications.
“…This figure makes it obvious that the proposed model outperforms the Tsang models in most of the classes, including Normal, DoS, Probe, and R2L, with 89.31%, 99.27%, 84.16%, and 48.13%, respectively. MOGFIDS provides the highest result in the U2R class. Figure 4 shows a comparison of the proposed system with the systems proposed in [24], [54], and [55] that have been tested on KDDTest−21 in terms of classification accuracy. Among those systems, the proposed detection model achieved the best classification accuracy of 94.68%.…”
Section: Additional Comparison
confidence: 99%
“…Then, the SLFFNN training is reduced to a least-squares problem. The ELM algorithms apply regularization theory to define a target function as [38][39][40]…”
Section: Extreme Learning Machine
confidence: 99%
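The excerpt above describes the core ELM idea: the input weights of a single-hidden-layer feedforward neural network (SLFFNN) are assigned randomly and never trained, so only the output weights must be found, which reduces training to a regularized least-squares (ridge) problem. A minimal sketch of that procedure follows; the toy data, the sigmoid activation, the hidden-layer size `L`, and the regularization constant `C` are all illustrative assumptions, not values taken from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples, 5 features, a simple binary target (illustrative only).
X = rng.standard_normal((100, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

L = 50   # number of hidden neurons (assumed)
C = 1.0  # regularization strength (assumed)

# Step 1: assign random input weights and biases; in ELM these stay fixed.
W = rng.standard_normal((5, L))
b = rng.standard_normal(L)

# Step 2: compute the hidden-layer output matrix H with a sigmoid activation.
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Step 3: solve the regularized least-squares problem for the output weights:
#   beta = (H^T H + I / C)^(-1) H^T y
beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ y)

# Training predictions and accuracy of the fitted network.
y_hat = H @ beta
acc = float(np.mean((y_hat > 0.5) == y))
```

Because `H` is fixed once the random weights are drawn, the whole "training" step is the single linear solve in Step 3, which is what makes ELM fast compared with iterative backpropagation.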