1993
DOI: 10.1111/j.1540-5915.1993.tb00491.x

Two‐Group Classification Using Neural Networks*

Abstract: Artificial neural networks are new methods for classification. We investigate two important issues in building neural network models: network architecture and size of training samples. Experiments were designed and carried out on two-group classification problems to find answers to these model-building questions. The first experiment deals with selection of architecture and sample size for different classification problems. Results show that choice of architecture and choice of sample size depend on the objecti…

Cited by 164 publications (88 citation statements) · References 27 publications
“…Recently, Hung and Denton [27] and Subramanian and Hung [59] have proposed to use a general-purpose nonlinear optimizer, GRG2, in training neural networks. The benefits of GRG2 have been reported in the literature for many classification problems [35,42,59]. This study uses a GRG2-based system to train neural networks.…”
Section: Neural Network
confidence: 99%
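The excerpt above replaces backpropagation with GRG2, a commercial generalized-reduced-gradient solver for general nonlinear programs. As a rough illustration of that idea only, the sketch below trains a tiny 2-3-1 network by handing its sum-of-squared-errors objective to SciPy's general-purpose minimize() (BFGS stands in for GRG2, which is not freely available); the data, architecture, and objective are invented for the example and are not taken from the cited studies.

```python
# Sketch: training a small two-group classifier with a general-purpose
# nonlinear optimizer, in the spirit of the GRG2-based approach the
# excerpt describes. GRG2 itself is commercial; SciPy's generic
# minimize() with BFGS stands in for it here.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                    # two input features
y = (X[:, 0] + X[:, 1] > 0).astype(float)        # two-group labels

n_in, n_hid = 2, 3                               # 2-3-1 network

def unpack(w):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid]; i += n_hid
    b2 = w[i]
    return W1, b1, W2, b2

def sse(w):
    """Sum-of-squared-errors objective over the training sample."""
    W1, b1, W2, b2 = unpack(w)
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))     # sigmoid hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output
    return np.sum((out - y) ** 2)

w0 = rng.normal(scale=0.5, size=n_in * n_hid + n_hid + n_hid + 1)
res = minimize(sse, w0, method="BFGS")           # general-purpose solver
print("final SSE:", res.fun)
```

Any quasi-Newton or gradient-free method accepted by minimize() would slot in the same way; the point is only that, once the weights are flattened into one vector, network training is an ordinary unconstrained nonlinear optimization problem.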
“…If the objective is to classify a given set of observations in the training sample as well as possible, a larger network may be desirable. On the other hand, if the network is used to predict classification of unseen objects in the test sample, then a larger network is not necessarily appropriate [42]. To see the effect of hidden nodes on the performance of neural network classifiers, we use 15 different levels of hidden nodes ranging from 1 to 15 in this study.…”
Section: Design of Neural Network Model
confidence: 99%
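A minimal sketch of the hidden-node sweep this excerpt describes, assuming nothing about the original data or software: scikit-learn's MLPClassifier stands in for the study's networks, and synthetic data replaces the real classification problems. It illustrates how in-sample fit can keep improving with network size while out-of-sample accuracy need not.

```python
# Sketch: sweep hidden nodes from 1 to 15, as in the excerpt's design,
# and compare in-sample vs. out-of-sample accuracy. Data and model
# choices here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=4, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)

for h in range(1, 16):                           # 15 levels of hidden nodes
    net = MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000, random_state=1)
    net.fit(X_tr, y_tr)
    print(f"{h:2d} hidden nodes: "
          f"train acc {net.score(X_tr, y_tr):.3f}, "
          f"test acc {net.score(X_te, y_te):.3f}")
```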
“…Alternatively, using too many neurons would increase training time and/or result in the ANN losing its generalization attribute. In this study, the number of nodes in the hidden layer is chosen as n2 = 2 × n1 + 1 (Patuwo et al., 1993), where n1 is the number of input variables. Then, the output of the ANN can be written as:…”
Section: Architecture of ANN
confidence: 99%
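The sizing rule quoted above reduces to a single line of code. A trivial sketch (the helper name is ours, purely illustrative):

```python
# The hidden-layer sizing heuristic the excerpt attributes to
# Patuwo et al. (1993): n2 = 2 * n1 + 1 hidden nodes for n1 inputs.
def hidden_nodes(n_inputs: int) -> int:
    """Heuristic hidden-layer size: twice the input count plus one."""
    return 2 * n_inputs + 1

for n1 in (2, 5, 10):
    print(f"{n1} inputs -> {hidden_nodes(n1)} hidden nodes")
```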
“…Multilayer perceptrons have been applied successfully to model classifiers using the backpropagation algorithm, approximating linearly and non-linearly separable complex functions. The multilayer perceptron is popular for solving several business and scientific problems that involve prediction, and also has a wide range of applications in classification problems (Denton, Hung, & Osyk [7]; Holmstrom, Koistinen, Laaksonen, & Oja [8]; Mangiameli & West [9]; Patuwo, Hu, & Hung [10]; Pendharkar [11]). But the MLP has the major problem of getting stuck at locally optimal solutions during training, and updating the weights of the network layers through error backpropagation is time consuming.…”
Section: Introduction
confidence: 99%
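To make the excerpt's two points concrete (backpropagation training, and its tendency to stall in poor local optima), here is a minimal backpropagation sketch on XOR with several random restarts. Everything in it, the data, the 2-2-1 architecture, and the learning rate, is an illustrative assumption rather than anything taken from the cited works.

```python
# Minimal backpropagation on a 2-2-1 sigmoid MLP fitting XOR, with
# random restarts as a common hedge against the local-optima problem
# the excerpt raises. Synthetic data, illustrative only.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)        # XOR: not linearly separable

def train(seed, epochs=5000, lr=0.5):
    r = np.random.default_rng(seed)
    W1, W2 = r.normal(size=(2, 2)), r.normal(size=(2, 1))
    b1, b2 = np.zeros(2), np.zeros(1)
    for _ in range(epochs):
        h = 1 / (1 + np.exp(-(X @ W1 + b1)))     # forward pass, hidden layer
        out = 1 / (1 + np.exp(-(h @ W2 + b2)))   # forward pass, output
        d_out = (out - y) * out * (1 - out)      # output-layer delta
        d_h = (d_out @ W2.T) * h * (1 - h)       # backpropagated hidden delta
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    return np.sum((out - y) ** 2)

# Different seeds can settle at different final errors; a seed that
# stalls near SSE 1.0 has stuck in a poor local optimum.
for s in range(3):
    print(f"seed {s}: final SSE {train(s):.4f}")
```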