Minimum mean square estimation and neural networks (1996)
DOI: 10.1016/0925-2312(95)00101-8

Cited by 11 publications (3 citation statements); References 29 publications.
“…In a loosely defined sense, ANN classification is a process of searching for the weight vector that minimizes the sum of squared errors between the network outputs and the desired responses. Manry et al. showed that a neural network can approximate the minimum mean square estimator arbitrarily well, provided that it is of adequate size and is well trained [10]. In this study, two hidden layers with four nodes were adopted by trial and error.…”
mentioning
confidence: 99%
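The property the quoted statement relies on, that minimizing the sum of squared errors drives a model toward the conditional mean E[y|x], i.e. the minimum mean square estimator, can be checked directly in a toy setting. The sketch below is a hypothetical illustration (not from the cited paper): for a discrete input, the least-squares fit assigns each input value the sample mean of its targets, which converges to E[y|x].

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with a known conditional mean:
# E[y | x=0] = 1.0 and E[y | x=1] = 3.0, plus Gaussian noise.
n = 20000
x = rng.integers(0, 2, size=n)
y = np.where(x == 0, 1.0, 3.0) + rng.normal(0.0, 0.5, size=n)

# A one-weight-per-input model fitted by minimizing the sum of squared
# errors has a closed-form solution: each weight is the group mean of y.
# This is the quantity a well-trained, adequately sized network approximates.
w0 = y[x == 0].mean()
w1 = y[x == 1].mean()
print(w0, w1)  # close to the true conditional means 1.0 and 3.0
```

The same argument extends to continuous inputs: a sufficiently flexible network minimizing squared error over enough data approaches the conditional mean pointwise, which is the sense in which it realizes the MMSE estimator.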
“…It has the universal approximation property [3]. With proper training, the MLP approximates the Bayes classifier [4] or the minimum mean square error (MMSE) estimator [5]. The MLP has found use in many applications including character recognition [6] [7], power load forecasting [8], prognostics [9], well logging [10], and data mining [11].…”
Section: Introduction
mentioning
confidence: 99%
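The claim that an MSE-trained MLP approximates the Bayes classifier follows from the same least-squares argument applied to 0/1 labels: the squared-error minimizer converges to the label mean, which is the posterior P(class = 1 | x), the quantity the Bayes rule thresholds at 0.5. A minimal sketch under hypothetical, made-up posteriors (not from the cited papers):

```python
import numpy as np

rng = np.random.default_rng(1)

# Three discrete input values with known posteriors P(class=1 | x).
n = 60000
x = rng.integers(0, 3, size=n)
p_true = np.array([0.2, 0.5, 0.9])
t = (rng.random(n) < p_true[x]).astype(float)  # 0/1 class labels

# Least-squares fit on 0/1 labels: the per-input minimizer is the label
# mean, an estimate of the posterior probability of class 1.
p_hat = np.array([t[x == k].mean() for k in range(3)])
print(p_hat)  # close to [0.2, 0.5, 0.9]

# Bayes decision rule: assign class 1 wherever the posterior exceeds 0.5.
decision = p_hat > 0.5
```

This is why a network trained with squared error on class indicator targets, if large enough and well trained, implements (approximately) the Bayes-optimal decision rule.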
“…The relatively short evaluation time makes it a better choice than many other classification and approximation methods such as support vector machines (SVMs) [35]. The universal approximation property [9] of the MLP, along with its ability to mimic the Bayes discriminant [10], optimal L2-norm estimates [8], and maximum a posteriori (MAP) [11] estimates…”
mentioning
confidence: 99%