2020
DOI: 10.1016/j.neunet.2020.04.001
A tight upper bound on the generalization error of feedforward neural networks


Cited by 5 publications (6 citation statements)
References 12 publications
“…BPNN [31], DNN [32], DBM and DBM-GA. The proposed method is tested against various performance metrics like accuracy, specificity, sensitivity, F-measure, execution time and mean error rate.…”
Section: Results
confidence: 99%
“…It can be seen from Figure 4 that the proposed RBFNN-GP has a good prediction performance for the permeability of the MBR, and the prediction error is within the range [−4, 3]. Moreover, Table 3 shows the comparison results of SASOA-FNN [12], SOFNN-HPS [17], SAS-RBFNN [18], AGMOPSO [21], ASOL-SORBFNN [23] and Fixed-RBFNN in predicting the permeability of the MBR. It can be seen from Table 3 that, under the same number of iterations, the learning times of SASOA-FNN [12] and RBFNN-GP are almost the same.…”
Section: Permeability Prediction of Membrane Bio-reactor
confidence: 96%
“…It is clear that the proposed RBFNN-GP can approximate the Mexican hat function with small prediction errors. In order to further prove the excellent generalization ability of the proposed method, the prediction results of the RBFNN-GP are compared with those of other dynamic neural networks based on structural adjustment, such as SASOA-FNN [12], SOFNN-HPS [17], SAS-RBFNN [18], AGMOPSO [21], ASOL-SORBFNN [23] and the RBFNN with a fixed structure (fixed-RBFNN). In order to make the comparison more meaningful, all algorithms in this experiment use the same data set, including training samples and test samples, and ensure that the initial number of neurons is the same.…”
Section: Benchmark Example A
confidence: 99%