1995
DOI: 10.1080/01431169508954507
The effect of training set size and composition on artificial neural network classification

Cited by 165 publications (84 citation statements). References 15 publications.
“…As expected, the accuracies increased as more training data were used due to an increased likelihood of capturing spectral and temporal differences within and among percent tree levels [15-17,52]. The accuracy differences were negligible (≤0.07% RMSE) when the most training data (8% per tree) were used.…”
Section: Discussion (supporting)
Confidence: 58%
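
The pattern in the excerpt above, accuracy improving with training set size until the gains become negligible, is easy to reproduce in miniature. Below is a minimal sketch assuming scikit-learn and its bundled digits dataset (both stand-ins of my own choosing, not the data or models of the cited studies) that trains the same network on growing fractions of the training data and reports test accuracy.

```python
# Sketch: classification accuracy as a function of training set size.
# scikit-learn's digits dataset stands in for remote sensing imagery here;
# the classifier, layer size, and fractions are arbitrary choices of mine.
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

for frac in (0.05, 0.1, 0.25, 0.5, 1.0):
    n = max(1, int(frac * len(X_train)))           # training subset size
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                        random_state=0)
    clf.fit(X_train[:n], y_train[:n])              # train on the subset only
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"train fraction {frac:>4}: n={n:4d}  test accuracy={acc:.3f}")
```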
“…The network's architecture and algorithm parameters were defined from an evaluation of several hundred candidate networks. Previous studies have shown that the training set, notably in terms of its size and composition, can have a marked impact on the accuracy of classification by a neural network [34-36]. Moreover, it is apparent that the individual training cases vary in importance, with those lying close to the class borders being the most informative and helpful in determining the location of the classification hyperplanes [37].…”
Section: Data and Methods of Classification (mentioning)
Confidence: 99%
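
The point that training cases near the class borders are the most informative is the idea behind margin-based sample selection. The sketch below illustrates that general idea, not the procedure of the cited study: it ranks candidate samples by the gap between the two highest predicted class probabilities, where a small gap marks a case close to a classification hyperplane. The function name and the reliance on scikit-learn's predict_proba convention are my own assumptions.

```python
# Sketch: rank candidate training cases by proximity to the class borders,
# using the margin between the two most probable classes. A small margin
# suggests the sample lies near a classification hyperplane and is likely
# to be informative if added to the training set.
import numpy as np

def border_proximity_ranking(model, X_candidates):
    """Return candidate indices sorted from smallest to largest margin."""
    proba = model.predict_proba(X_candidates)   # shape: (n, n_classes)
    top2 = np.sort(proba, axis=1)[:, -2:]       # two highest probabilities
    margin = top2[:, 1] - top2[:, 0]            # small margin = near border
    return np.argsort(margin)

# Usage: fit any probabilistic classifier on an initial training set, then
# add the lowest-margin candidates to the training data, e.g.:
#   ranked = border_proximity_ranking(clf, X_pool)
#   X_next = X_pool[ranked[:50]]
```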
“…The MLP neural network, a supervised model that uses single or multilayer perceptrons to approximate the inherent input-output relationships, is the most commonly used network model for image classification in remote sensing [26-28]. MLP networks are typically trained with the supervised backpropagation (BP) algorithm [23] and consist of one input layer, one or more hidden layers, and one output layer (Figure 1).…”
Section: Neural Network Classification Approaches (mentioning)
Confidence: 99%
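
As a concrete illustration of the architecture described in the excerpt above (an input layer, a hidden layer, and an output layer, trained by backpropagation), here is a minimal NumPy sketch with a single hidden layer. It is a generic toy implementation with arbitrary layer sizes and learning rate, not the networks of the cited remote sensing studies.

```python
# Sketch: one-hidden-layer MLP trained with backpropagation (BP).
# Generic toy implementation; layer sizes, learning rate, and epoch count
# are arbitrary, and XOR below is only a sanity check, not remote sensing data.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y_onehot, n_hidden=8, lr=1.0, epochs=10000):
    n_in, n_out = X.shape[1], y_onehot.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, n_hidden));  b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, (n_hidden, n_out)); b2 = np.zeros(n_out)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)            # input layer -> hidden layer
        o = sigmoid(h @ W2 + b2)            # hidden layer -> output layer
        d_o = (o - y_onehot) * o * (1 - o)  # squared-error output delta
        d_h = (d_o @ W2.T) * h * (1 - h)    # delta propagated back to hidden
        W2 -= lr * h.T @ d_o / len(X); b2 -= lr * d_o.mean(axis=0)
        W1 -= lr * X.T @ d_h / len(X); b1 -= lr * d_h.mean(axis=0)
    return W1, b1, W2, b2

# Sanity check on XOR: outputs should approach the one-hot targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[1, 0], [0, 1], [0, 1], [1, 0]], dtype=float)
W1, b1, W2, b2 = train_mlp(X, Y)
print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))
```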
“…Both [29] and [30] found that an enhanced neural network can be achieved by incorporating a momentum term (the past increment to the weight) to speed up and stabilize the BP learning. Although there are many examples of successful MLP applications [11,17,26-28,31,32], it is widely recognized that MLPs are sensitive to many operational factors, including the size and quality of the training data set, the network architecture, the training parameters, and over-fitting problems. These factors are application-dependent and best addressed on a case-by-case basis.…”
Section: Neural Network Classification Approaches (mentioning)
Confidence: 99%
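
The momentum term mentioned in the excerpt adds a fraction of the previous weight increment to each new update, commonly written Δw(t) = α·Δw(t-1) - η·∂E/∂w, where α is the momentum coefficient and η the learning rate. A minimal sketch of that update step follows; the parameter names lr and momentum are my own.

```python
# Sketch: BP weight update with a momentum term. A fraction of the previous
# increment ("velocity") is carried into each new update, which damps
# oscillations and speeds learning along consistent gradient directions.
import numpy as np

def momentum_update(w, grad, velocity, lr=0.1, momentum=0.9):
    """delta_w(t) = momentum * delta_w(t-1) - lr * grad; returns (w, velocity)."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Usage inside a training loop (velocity starts at zero for each weight array):
#   v1 = np.zeros_like(W1)
#   then, each epoch: W1, v1 = momentum_update(W1, grad_W1, v1)
```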