2003
DOI: 10.1080/0143116031000103781
Radial Basis Function and Multilayer Perceptron neural networks for sea water optically active parameter estimation in case II waters: A comparison

Cited by 32 publications (10 citation statements) | References 17 publications | Citing publications: 2006–2021

“…[3,16], respectively. RBFs have been extensively used as non-linear regression models because of their simple topological structure and the speed of their training phase.…”
Section: Results
confidence: 98%
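
To make the claim above concrete, the following is a minimal sketch of Gaussian RBF-network regression in NumPy. The fast training the statement refers to comes from fitting only the linear output weights in closed form once the centers are fixed; the centers here are simply a random subset of the inputs, and all data and hyperparameters are illustrative assumptions, not values from the cited papers.

```python
# Minimal Gaussian-RBF regression sketch (NumPy only). The centers are a random
# subset of the training inputs and the linear output weights are fitted in
# closed form -- that closed-form step is what makes the training phase fast.
import numpy as np

rng = np.random.default_rng(0)

def rbf_design(X, centers, width):
    # phi[i, j] = exp(-||x_i - c_j||^2 / (2 * width^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Illustrative 1-D problem; not data from the cited papers.
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(2.0 * X[:, 0]) + 0.1 * rng.standard_normal(200)

centers = X[rng.choice(len(X), size=20, replace=False)]   # fixed centers
Phi = rbf_design(X, centers, width=0.5)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)               # closed-form fit

rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))
print(f"training RMSE: {rmse:.4f}")
```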
“…Finally, we show that the second-order TS models achieve approximation performance considerably superior to that of the RBF and MLP networks proposed in Refs. [3,16] and applied to the same simulated data.…”
Section: Introduction
confidence: 99%
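
The statement compares against second-order Takagi-Sugeno (TS) models. As a hedged illustration of that model class only, the sketch below assumes "second-order" means quadratic rule consequents: each rule fires through a Gaussian membership function, and the model output is the membership-weighted average of the quadratic consequents. The rule parameters are invented for illustration and do not reproduce the cited method.

```python
# Hedged sketch of a second-order Takagi-Sugeno (TS) fuzzy model: each rule has
# a Gaussian membership function and a quadratic (second-order) consequent; the
# output is the membership-weighted average of the rule outputs.
import numpy as np

def ts_predict(x, centers, widths, coeffs):
    # x: scalar input; one rule per (center, width, (a, b, c)) triple.
    mu = np.exp(-((x - centers) ** 2) / (2.0 * widths ** 2))  # firing strengths
    rule_out = coeffs[:, 0] * x ** 2 + coeffs[:, 1] * x + coeffs[:, 2]
    return np.sum(mu * rule_out) / np.sum(mu)                 # weighted average

# Illustrative three-rule model (parameters are placeholders).
centers = np.array([-2.0, 0.0, 2.0])
widths = np.array([1.0, 1.0, 1.0])
coeffs = np.array([[0.1, -0.5, 1.0],   # one (a, b, c) row per rule
                   [-0.2, 0.0, 0.5],
                   [0.1, 0.5, -1.0]])

print(ts_predict(0.5, centers, widths, coeffs))
```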
“…However, the logistic sigmoid and hyperbolic tangent functions are the functions most commonly used with MLP neural networks (Dawson and Wilby, 2001; Maier and Dandy, 2000), while the Gaussian function is the one most commonly used with RBF neural networks (Corsini et al., 2003; Dawson and Wilby, 2001). The logistic sigmoid function is particularly appealing when the raw data contain outliers, because it reduces the effect of extreme input values on the performance of the network, so that extreme inputs do not have extreme effects on the network outputs (Hill et al., 1994).…”
Section: Optimization of Network Architecture
confidence: 99%
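
The activation functions named in this statement are standard, so a short numerical check can illustrate the outlier argument: the logistic sigmoid saturates, so an extreme input shifts the unit's output only marginally. The input values below are arbitrary.

```python
# Evaluate the three activations named above on a moderate input, a large
# input, and an outlier-sized input. The sigmoid and tanh saturate, so the
# outlier barely changes the unit output; the Gaussian simply decays to zero.
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def gaussian(z, width=1.0):
    return np.exp(-(z ** 2) / (2.0 * width ** 2))

for z in (0.5, 5.0, 50.0):  # 50.0 stands in for an outlier
    print(f"z={z:5.1f}  logistic={logistic(z):.4f}  "
          f"tanh={np.tanh(z):.4f}  gaussian={gaussian(z):.4g}")
```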
“…To efficiently find a reasonable learning rate, networks are trained and the outputs on the validation dataset are evaluated (Bruzzone et al., 2004; Corsini et al., 2003). During this process, the ANN model is trained and continually optimized against the cross-validation dataset, so that the network structure can be assessed using the error values produced on the cross-validation data during training.…”
Section: Optimization of Network Structure
confidence: 99%
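
The procedure described, training candidate networks and scoring them on a held-out validation set, can be sketched as a simple learning-rate search. The model below is a single linear neuron trained by gradient descent rather than a full ANN, and the candidate rates and data are illustrative assumptions, not values from the cited papers.

```python
# Validation-based learning-rate selection: train once per candidate rate,
# score each trained model on a held-out validation set, and keep the rate
# with the lowest validation error.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 3))
y = X @ np.array([1.5, -2.0, 0.7]) + 0.1 * rng.standard_normal(300)
X_tr, y_tr, X_val, y_val = X[:200], y[:200], X[200:], y[200:]

def train(lr, epochs=200):
    # Gradient descent on mean squared error for a linear neuron.
    w = np.zeros(3)
    for _ in range(epochs):
        grad = 2.0 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)
        w -= lr * grad
    return w

best_lr, best_err = None, np.inf
for lr in (0.001, 0.01, 0.1, 0.5):          # illustrative candidate rates
    w = train(lr)
    err = np.mean((X_val @ w - y_val) ** 2)  # validation MSE
    if err < best_err:
        best_lr, best_err = lr, err
print(f"selected learning rate: {best_lr} (validation MSE {best_err:.4f})")
```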
“…Their approximations are smooth and continuous, and their accuracy increases with the number of nodes in the hidden layers. The benefits and limitations of MLP networks have become increasingly apparent, and the results of comparative studies in diverse domains are now available (Corsini et al., 2003; Jayawardena et al., 1997). The MLP is highly non-linear in its parameters.…”
Section: Introduction
confidence: 99%
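
A small experiment can illustrate the claim that approximation accuracy grows with the number of hidden nodes. The sketch below trains a one-hidden-layer tanh MLP by full-batch gradient descent on a smooth 1-D target and reports the training RMSE for several hidden-layer sizes; the architecture, target function, and training settings are illustrative assumptions, not those of the paper.

```python
# One-hidden-layer MLP (tanh hidden units, linear output) trained by full-batch
# gradient descent on a smooth 1-D target. Comparing hidden-layer sizes
# illustrates how accuracy improves with more hidden nodes.
import numpy as np

rng = np.random.default_rng(2)
X = np.linspace(-3.0, 3.0, 200)[:, None]
y = np.sin(2.0 * X[:, 0])

def fit_mlp(hidden, epochs=3000, lr=0.05):
    W1 = 0.5 * rng.standard_normal((1, hidden))
    b1 = np.zeros(hidden)
    w2 = 0.5 * rng.standard_normal(hidden)
    b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                      # hidden activations
        g_out = 2.0 * (H @ w2 + b2 - y) / len(y)      # dMSE/d(output)
        g_h = np.outer(g_out, w2) * (1.0 - H ** 2)    # backprop through tanh
        w2 -= lr * (H.T @ g_out)
        b2 -= lr * g_out.sum()
        W1 -= lr * (X.T @ g_h)
        b1 -= lr * g_h.sum(axis=0)
    H = np.tanh(X @ W1 + b1)
    return np.sqrt(np.mean((H @ w2 + b2 - y) ** 2))

for hidden in (2, 8, 32):
    print(f"{hidden:2d} hidden nodes -> training RMSE {fit_mlp(hidden):.4f}")
```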