1997
DOI: 10.1016/s0893-6080(96)00086-x

Two Strategies to Avoid Overfitting in Feedforward Networks

Cited by 115 publications (45 citation statements)
References 12 publications
“…Multilayer perceptron (MLP) neural networks are universal function approximators (Lek & Guegan, 1999). With enough twisting of the parameters, one can use them to implement any classification rule (Schittenkopf, Deco, & Brauer, 1997). Consequently, the networks could in theory also be designed to explain the version of the dataset in which the GPA scores would be randomly assigned to the students.…”
Section: Fitting to the Test Data
confidence: 99%
“…In general, this type of modeling works well if the training data presented to the ANFIS is fully representative of the features of the data the trained system is intended to model. Checking and testing data sets allow checking the generalization capability of the resulting fuzzy inference system and avoiding overfitting [51].…”
Section: ANFIS Architecture
confidence: 99%
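The excerpt above describes holding out checking and testing sets to guard against overfitting. As a minimal sketch of that idea (a toy one-parameter linear model on hypothetical data, not the cited ANFIS setup), training below halts once the error on a held-out checking set stops improving:

```python
import random

random.seed(0)

# Toy data: y = 2x plus noise, split into training and checking sets.
xs = [i / 10 for i in range(40)]
ys = [2 * x + random.gauss(0, 0.3) for x in xs]
train = list(zip(xs[::2], ys[::2]))
check = list(zip(xs[1::2], ys[1::2]))

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Gradient descent on the training set, with early stopping driven
# by the checking set: keep the weight with the lowest checking
# error and halt after `patience` epochs without improvement.
w, lr = 0.0, 0.01
best_w, best_err = w, mse(w, check)
patience, bad = 10, 0
for epoch in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in train) / len(train)
    w -= lr * grad
    err = mse(w, check)
    if err < best_err:
        best_w, best_err, bad = w, err, 0
    else:
        bad += 1
        if bad >= patience:
            break  # checking error stopped improving: stop training
```

Here `best_w` ends near the true slope of 2; on a model with more capacity, the same checking-set criterion is what prevents the fit from chasing noise.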
“…One of this research's objectives is to prevent overfitting (Schittenkopf et al, 1997). The prediction procedure thus utilizes PCA and K-means clustering in order to optimize the network training.…”
Section: Neural Network Structure
confidence: 99%
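The quoted procedure combines PCA with K-means clustering before network training. As an illustration of the clustering step only, here is a minimal pure-Python K-means on toy 2-D data; the data, initialisation, and parameters are hypothetical and not taken from the cited work:

```python
import random

random.seed(1)

# Toy 2-D inputs drawn from two well-separated blobs.
pts = ([(random.gauss(0, 0.2), random.gauss(0, 0.2)) for _ in range(30)]
       + [(random.gauss(3, 0.2), random.gauss(3, 0.2)) for _ in range(30)])

def kmeans(points, k, iters=20):
    # Deterministic initialisation: spread starting centres across the
    # data so this toy example converges reliably.
    centers = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centre (squared distance).
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                  + (p[1] - centers[c][1]) ** 2)
            groups[i].append(p)
        # Move each centre to the mean of its assigned points.
        centers = [(sum(p[0] for p in g) / len(g),
                    sum(p[1] for p in g) / len(g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

centers = sorted(kmeans(pts, 2))
```

In a pipeline like the one quoted, the resulting cluster structure (here, centres near (0, 0) and (3, 3)) would be computed on PCA-reduced inputs and used to organise the network's training data.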
“…So the network will be able to focus on the important features in the data rather than fitting the noise, an inherent component of any environmental field data set (Schittenkopf et al, 1997;El-Din and Smith, 2002).…”
Section: Article in Press
confidence: 99%