2001
DOI: 10.1109/72.963764

Confidence estimation methods for neural networks: a practical comparison

Abstract: Feedforward neural networks, particularly multilayer perceptrons, are widely used in regression and classification tasks. A reliable and practical measure of prediction confidence is essential. In this work three alternative approaches to prediction confidence estimation are presented and compared. The three methods are the maximum likelihood, approximate Bayesian, and the bootstrap technique. We consider prediction uncertainty owing to both data noise and model parameter misspecification. The methods are test…
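As a rough, self-contained illustration of the bootstrap technique named in the abstract (a sketch under assumptions, not the authors' exact procedure), the snippet below trains several networks on resampled copies of the training set and takes the spread of their predictions as the model-parameter component of a confidence estimate. The function name bootstrap_confidence, the toy data, and the use of scikit-learn's MLPRegressor are illustrative choices, not from the paper.

```python
# Illustrative bootstrap confidence sketch (assumed details, not the paper's code):
# train B networks on bootstrap resamples and use the spread of their predictions
# as a measure of model-parameter uncertainty.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.utils import resample

def bootstrap_confidence(X_train, y_train, X_test, n_models=20, z=1.96):
    """Return the mean prediction and a rough 95% half-width from the ensemble spread."""
    preds = []
    for b in range(n_models):
        Xb, yb = resample(X_train, y_train, random_state=b)   # sample with replacement
        net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=b)
        net.fit(Xb, yb)
        preds.append(net.predict(X_test))
    preds = np.asarray(preds)                 # shape (n_models, n_test)
    mean = preds.mean(axis=0)
    spread = preds.std(axis=0, ddof=1)        # model-parameter uncertainty only
    return mean, z * spread

# toy usage on a noisy 1-D regression problem
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
mean, half_width = bootstrap_confidence(X, y, X[:5])
print(mean, half_width)
```

A fuller prediction interval would also add an estimate of the data-noise variance, which the abstract treats as a separate source of uncertainty alongside parameter misspecification.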

Cited by 173 publications (100 citation statements)
References 16 publications
“…The neural network introduces uncertainty due to model misspecification and the inefficiencies of the training method (Papadopoulos and Edwards 2001). A network trained on a given dataset forms a better representation of the data in regions of high input data density (Papadopoulos and Edwards 2001). Moreover, because of the nature of the training algorithm, there is no guarantee that the weight values correspond to the global minimum of the error function.…”
Section: Evaluation Of Neural Network Robustness Against Uncertainty (mentioning)
confidence: 99%
“…Moreover, because of the nature of the training algorithm, there is no guarantee that the weight values correspond to the global minimum of the error function. Even if the global minimum is found, the solution will not necessarily be optimal because the finite training set does not fully describe the true data-generating mechanism (Papadopoulos and Edwards 2001).…”
Section: Evaluation Of Neural Network Robustness Against Uncertainty (mentioning)
confidence: 99%
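To make the quoted point about local minima concrete, here is a minimal sketch (assuming a small scikit-learn MLP on toy data; none of this is taken from the cited works): retraining the same architecture on the same data from different random initialisations generally ends in different minima of the error function, so the prediction at a fixed query point varies from run to run, and that spread is one visible symptom of the parameter uncertainty being discussed.

```python
# Illustrative sketch (assumed setup): same data, same architecture, different
# random initialisations -> different local minima -> different predictions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)
x_query = np.array([[0.5]])

preds = []
for seed in range(10):
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=seed)
    net.fit(X, y)                      # identical data, different initial weights
    preds.append(net.predict(x_query)[0])

print("prediction spread across restarts:", np.std(preds))
```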
“…A strategy to minimize these deviations is to use such an ANN committee that the simple average of the ANN outputs is favourable as it gets closer to the expected value [14−17]. The idea behind a committee machine is the minimization of the random effects induced in the ANN due to the learning process [8,18]. It is possible to make a direct analogy with the metrology where, in order to determine the value for a measurand with the presence of random effects in the measurement process, the average of several measurements represents a better result than a single individual measurement [19].…”
Section: Committees Of Artificial Neural Network (mentioning)
confidence: 99%
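A minimal sketch of the committee idea in the quoted passage, assuming independently trained networks whose outputs are simply averaged (the member count, toy data, and scikit-learn MLPRegressor are illustrative assumptions, not from the cited work): the simple average partly cancels the run-to-run random effects of training, in the same spirit as averaging repeated measurements of a quantity.

```python
# Illustrative committee-machine sketch (assumed details): average the outputs
# of several independently trained networks to reduce random training effects.
import numpy as np
from sklearn.neural_network import MLPRegressor

def committee_predict(members, X):
    """The committee output is the simple mean of the member predictions."""
    outputs = np.asarray([m.predict(X) for m in members])
    return outputs.mean(axis=0)

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(150, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=150)

members = [
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=k).fit(X, y)
    for k in range(7)
]
print(committee_predict(members, X[:3]))
```

Averaging here reduces the variance of the committee output; the member-to-member spread could also serve as a crude confidence indicator, in the same spirit as the bootstrap sketch above.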