Proceedings of ICNN'95 - International Conference on Neural Networks
DOI: 10.1109/icnn.1995.488119
A comparison of criterion functions for a neural network applied to binary detection

Cited by 14 publications (11 citation statements)
References 7 publications
“…To illustrate the use of IS techniques in the training phase of a NN, let us consider the misclassification probability as an objective function for applications in classifications (or the error probability for detection in communications [11][12][13]). According to the notation given above, the error probability (P e ) can be expressed as follows…”
Section: Fig. 1 Binary Detector Structure; 2 Error Probability as Objective
confidence: 99%
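The quoted passage uses the error probability P_e as the training objective. A minimal Monte Carlo sketch of estimating P_e for a binary detector follows; the Gaussian signal model, equal priors, and threshold are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def error_probability(detector, n=100_000, amplitude=1.0):
    """Monte Carlo estimate of P_e for a binary detector.

    Assumed toy model: under H0 the observation is N(0, 1) noise;
    under H1 a constant `amplitude` is added; priors are 1/2 each.
    """
    x0 = rng.normal(0.0, 1.0, n)               # observations under H0
    x1 = amplitude + rng.normal(0.0, 1.0, n)   # observations under H1
    p_fa = np.mean(detector(x0) == 1)          # false-alarm rate
    p_m = np.mean(detector(x1) == 0)           # miss rate
    return 0.5 * (p_fa + p_m)                  # P_e under equal priors

# Threshold detector at amplitude/2 (optimal for this symmetric case)
pe = error_probability(lambda x: (x > 0.5).astype(int))
```

For this toy model the estimate lands near the analytical value Q(0.5) ≈ 0.31.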
“…depends on a parameter q (e.g. the signal-to-noise ratio) [12,13] and we can write (11), where q* is the q-value that minimizes the variance (6) of the estimator. The optimal q*-value is obtained experimentally by computing an estimator of (6). An estimator of (6) is given by…”
Section: Fig. 1 Binary Detector Structure; 2 Error Probability as Objective
confidence: 99%
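The passage describes finding the biasing value q* experimentally by minimizing an estimate of the estimator's variance. A hedged sketch, with a standard Gaussian tail probability standing in for the error probability; the biased density N(q, 1) and the grid search are illustrative choices, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def is_estimate(t, q, n=50_000):
    """Importance-sampling estimate of P(X > t) for X ~ N(0, 1).

    Samples come from the biased density N(q, 1); each sample is
    reweighted by the likelihood ratio w(x) = exp(-q*x + q**2 / 2).
    Returns the estimate and the sample variance of the weighted terms.
    """
    x = rng.normal(q, 1.0, n)
    w = np.exp(-q * x + 0.5 * q * q) * (x > t)
    return w.mean(), w.var()

# Grid search for the biasing value q* that minimizes the estimated
# variance; for this Gaussian tail it is known to lie near q = t.
t = 3.0
qs = np.linspace(0.0, 5.0, 26)
variances = [is_estimate(t, q)[1] for q in qs]
q_star = qs[int(np.argmin(variances))]
```

At q = 0 the estimator is plain Monte Carlo; biasing toward the rare region cuts the variance by orders of magnitude, which is why the q*-search pays off in the training loop.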
“…Gandhi et al [4,5] and J.L. Sanz and D. Andina [6,7] applied multilayer perceptrons with one output to approximate the Neyman-Pearson detector. Once the neural network had been trained, they proposed to vary the detection threshold attending to P_FA requirements.…”
Section: Introduction
confidence: 99%
“…Once the neural network had been trained, they proposed to vary the detection threshold attending to P_FA requirements. In [4,5] the LMSE criterion was used, while in [6,7] the LMSE and cross-entropy errors were used, among others. For desired outputs 1 for hypothesis H_1 and 0 for hypothesis H_0, their works are based on the assumption that the network converges to an "operating point" that minimizes the probability of error over the training set.…”
Section: Introduction
confidence: 99%
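As a sketch of the scheme described above (train with the cross-entropy criterion, then sweep the detection threshold to meet a P_FA requirement), the following uses a single sigmoid neuron and synthetic Gaussian data; the data model, learning rate, and target P_FA are assumptions for illustration, not details from [4-7]:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data (assumed): H0 ~ N(0, 1), H1 ~ N(1.5, 1), labels 0/1.
n = 5_000
x = np.concatenate([rng.normal(0.0, 1.0, n), rng.normal(1.5, 1.0, n)])
y = np.concatenate([np.zeros(n), np.ones(n)])

# A single sigmoid neuron (the simplest one-output network), trained
# with the cross-entropy criterion by batch gradient descent.
w, b = 0.0, 0.0
for _ in range(500):
    out = 1.0 / (1.0 + np.exp(-(w * x + b)))
    g = out - y                    # dCE/dlogit for the sigmoid unit
    w -= 0.1 * np.mean(g * x)
    b -= 0.1 * np.mean(g)

# After training, set the threshold from the H0 outputs so that the
# empirical false-alarm rate matches a desired P_FA (here 0.05),
# instead of using the default 0.5 decision point.
out_h0 = 1.0 / (1.0 + np.exp(-(w * x[:n] + b)))
thr = np.quantile(out_h0, 1.0 - 0.05)
p_fa = float(np.mean(out_h0 > thr))
```

The key point is the separation of concerns: the criterion function shapes the learned output statistic, while the operating point on the ROC curve is chosen afterwards from the threshold.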
“…The threshold is used to fix the desired P_FA. This scheme has been used previously in several works [1,[18][19][20]. An equivalent implementation consists of varying the bias of the output neuron [21,22].…”
Section: Introduction
confidence: 99%
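The claimed equivalence between varying the detection threshold and varying the bias of the output neuron can be checked directly for a sigmoid output: comparing sigmoid(z) against t is the same as shifting the bias by -logit(t) and comparing against 0.5, since the sigmoid is monotone. A small numerical check on toy pre-activation values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# sigmoid(z) > t  <=>  z > logit(t)  <=>  sigmoid(z - logit(t)) > 0.5,
# so a threshold change and a bias shift give identical decisions.
z = np.linspace(-5.0, 5.0, 101)     # toy pre-activation values
t = 0.2
logit_t = np.log(t / (1.0 - t))
d1 = sigmoid(z) > t                 # decisions: vary the threshold
d2 = sigmoid(z - logit_t) > 0.5     # decisions: vary the bias instead
same = bool(np.array_equal(d1, d2))
```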