2006
DOI: 10.1002/nav.20154

A misclassification cost‐minimizing evolutionary–neural classification approach

Abstract: Machine learning algorithms that incorporate misclassification costs have recently received considerable attention. In this paper, we use the principles of evolution to develop and test an evolutionary/genetic algorithm (GA)-based neural approach that incorporates asymmetric Type I and Type II error costs. Using simulated, real-world medical and financial data sets, we compare the results of the proposed approach with other statistical, mathematical, and machine learning approaches, which include statistical l…
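The abstract describes a genetic algorithm that trains a neural classifier whose objective is the total asymmetric Type I/Type II misclassification cost rather than plain accuracy. The following is a minimal Python sketch of that general idea, not the authors' exact method: the chromosome encodes the weights of a small one-hidden-layer network, and fitness is the cost-weighted sum of false positives and false negatives. The cost values, network size, and GA settings are illustrative assumptions.

```python
# Sketch of a cost-minimizing GA-neural classifier (assumed parameters throughout).
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES, N_HIDDEN = 5, 4
COST_TYPE_I, COST_TYPE_II = 1.0, 5.0   # assumed asymmetric error costs

def n_weights():
    return (N_FEATURES + 1) * N_HIDDEN + (N_HIDDEN + 1)

def predict(weights, X):
    # Unpack the flat chromosome into layer weights and run a forward pass.
    w1 = weights[:(N_FEATURES + 1) * N_HIDDEN].reshape(N_FEATURES + 1, N_HIDDEN)
    w2 = weights[(N_FEATURES + 1) * N_HIDDEN:]
    Xb = np.hstack([X, np.ones((len(X), 1))])           # add bias column
    h = np.tanh(Xb @ w1)
    hb = np.hstack([h, np.ones((len(h), 1))])
    return (hb @ w2 > 0).astype(int)                    # class 0 or 1

def misclassification_cost(weights, X, y):
    # Type I = false positive, Type II = false negative (assumed convention).
    pred = predict(weights, X)
    fp = np.sum((pred == 1) & (y == 0))
    fn = np.sum((pred == 0) & (y == 1))
    return COST_TYPE_I * fp + COST_TYPE_II * fn

def evolve(X, y, pop_size=40, generations=100, mut_sigma=0.1):
    pop = rng.normal(size=(pop_size, n_weights()))
    for _ in range(generations):
        costs = np.array([misclassification_cost(ind, X, y) for ind in pop])
        parents = pop[np.argsort(costs)[:pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(n_weights()) < 0.5            # uniform crossover
            child = np.where(mask, a, b) + rng.normal(0, mut_sigma, n_weights())
            children.append(child)
        pop = np.vstack([parents, children])
    costs = np.array([misclassification_cost(ind, X, y) for ind in pop])
    return pop[np.argmin(costs)]

# Toy usage with synthetic data
X = rng.normal(size=(200, N_FEATURES))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
best = evolve(X, y)
print("training cost:", misclassification_cost(best, X, y))
```

Raising COST_TYPE_II relative to COST_TYPE_I pushes the evolved classifier toward fewer Type II errors at the expense of more Type I errors, which is the trade-off the paper's approach is designed to control.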

Cited by 28 publications (18 citation statements, published 2007–2020). References: 49 publications.
“…The data for 1987–92 were treated as the training sample, while the data for 1993–95 were the testing sample. The description of our data can be found in our previous study (Pendharkar & Nanda, 2006). To predict bankruptcy, we used financial ratios, which were earnings before interest and taxes/interest expense, earnings before interest and taxes/assets, current assets/current liabilities, retained earnings/assets, and market value of equity/book value of debt.…”
Section: Data Experiments and Results (mentioning)
confidence: 99%
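As a brief illustration of the setup described in that citing study, the sketch below builds the five named financial ratios and applies the year-based train/test split (1987–92 for training, 1993–95 for testing). The column names and DataFrame layout are hypothetical assumptions, not the authors' actual data schema.

```python
# Illustrative feature construction and year split (assumed column names).
import pandas as pd

def add_ratios(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["ebit_to_interest"] = df["ebit"] / df["interest_expense"]
    out["ebit_to_assets"] = df["ebit"] / df["total_assets"]
    out["current_ratio"] = df["current_assets"] / df["current_liabilities"]
    out["retained_to_assets"] = df["retained_earnings"] / df["total_assets"]
    out["mve_to_debt"] = df["market_value_equity"] / df["book_value_debt"]
    return out

def year_split(df: pd.DataFrame):
    train = df[df["fiscal_year"].between(1987, 1992)]
    test = df[df["fiscal_year"].between(1993, 1995)]
    return train, test
```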
“…Zheng and Padmanabhan (2006) have recently extended this idea of 'active learning' to the cost-effective acquisition of additional data to enhance classification performance. So far, only a few OR contributions have linked asymmetric costs or imbalanced data sets routinely found in OR applications to the methods and processes of DM (Viaene and Dedene, 2005; Janssens et al, 2006; Pendharkar and Nanda, 2006).…”
Section: Computer-intensive Methods (mentioning)
confidence: 99%
“…Regression tree models are more efficient than genetic programming and neural network models [37]. In a study comparing a regression tree model and neural networks for forecasting software effort, Srinivasan and Fisher found that the former performs as well as the latter.…”
Section: Data Collection and Analysis (mentioning)
confidence: 98%