Third ACIS Int'l Conference on Software Engineering Research, Management and Applications (SERA'05) 2005
DOI: 10.1109/sera.2005.30

Combining classification improvements by ensemble processing

Abstract: The k-nearest neighbor (KNN) classification is a simple and effective classification approach. However, improving the performance of the classifier remains attractive. Combining multiple classifiers is an effective technique for improving accuracy. There are many general combining algorithms, such as Bagging, Boosting, or Error Correcting Output Coding, that significantly improve classifiers such as decision trees, rule learners, or neural networks. Unfortunately, these combining methods do not impro…
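To make the combining idea concrete, here is a minimal NumPy sketch (not taken from the paper) of the bagging scheme applied to a KNN base classifier: each ensemble member is fit on a bootstrap resample of the training data and the members vote. The function names and the assumption of integer class labels 0..C-1 are purely illustrative.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Plain k-nearest-neighbor prediction with Euclidean distance.
    Assumes integer class labels 0..C-1."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)        # distances to all training points
        nearest = y_train[np.argsort(d)[:k]]           # labels of the k closest points
        preds.append(np.bincount(nearest).argmax())    # majority vote among the k neighbors
    return np.array(preds)

def bagged_knn_predict(X_train, y_train, X_test, k=3, n_estimators=10, rng=None):
    """Bagging: each member sees a bootstrap resample; members are combined by vote."""
    rng = np.random.default_rng(rng)
    votes = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X_train), len(X_train))   # sample with replacement
        votes.append(knn_predict(X_train[idx], y_train[idx], X_test, k))
    votes = np.stack(votes)                                  # shape (n_estimators, n_test)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```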

Cited by 10 publications (7 citation statements)
References 7 publications

Citation statements:
“…According to the discussion about the desirable subsets, the number of the features in each subset should be large enough to get a reliable determination of the class label. A lower bound can be obtained using a simple feature reduction technique; for details see [9-10]. …”
Section: Proposed Methods
confidence: 99%
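A rough illustration of the feature-subset idea in the quoted statement, assuming the subsets (each at least the lower-bound size) are supplied by the caller; the 1-NN base learner and all names here are illustrative stand-ins, not the cited method.

```python
import numpy as np

def subset_ensemble_predict(X_train, y_train, X_test, subsets):
    """One 1-NN member per feature subset; members are combined by majority vote.

    `subsets` is a list of column-index arrays. Following the quoted discussion,
    each subset should contain at least the lower-bound number of features needed
    for a reliable class decision (that bound is left to the caller here).
    Assumes integer class labels 0..C-1.
    """
    votes = []
    for cols in subsets:
        Xtr, Xte = X_train[:, cols], X_test[:, cols]
        d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)  # pairwise distances
        votes.append(y_train[d.argmin(axis=1)])                        # 1-NN on this subset
    votes = np.stack(votes)                                            # (n_subsets, n_test)
    return np.array([np.bincount(v).argmax() for v in votes.T])
```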
“…Generally speaking, there are five classes of well-established strategies to deal with the missing values: 1) discard the incomplete samples (e.g., pairwise deletion [2]); 2) avoid the missing features by dynamic decisions (e.g., decision trees such as CART [7]); 3) recover unknown values from the similar samples (e.g., Expectation Maximization (EM) [8]); 4) insert possible values for the missing features, classify after each insertion and combine the classification results (e.g., Multiple Imputations (MI) [9]); and 5) design multiple classifiers on the subsets of the data and combine the classification results (e.g., ensemble classifiers [17]).…”
Section: Related Work
confidence: 99%
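As a hedged sketch of strategy 4 (multiple imputations) from the quoted taxonomy, not the method of any cited paper: missing entries (NaN) are filled several times with plausible values, each completed copy is classified, and the per-draw predictions are combined by vote. The column-wise Gaussian draw is a deliberately simple stand-in for a real imputation model, and `classify` is any user-supplied predictor.

```python
import numpy as np

def multiple_imputation_predict(classify, X_missing, n_draws=5, rng=None):
    """Fill NaNs with plausible values, classify each completion, vote over draws.

    classify  : function mapping a complete feature matrix to integer labels 0..C-1
    X_missing : feature matrix with NaN marking the missing entries
    """
    rng = np.random.default_rng(rng)
    mask = np.isnan(X_missing)
    mu = np.nanmean(X_missing, axis=0)          # per-feature mean of observed values
    sd = np.nanstd(X_missing, axis=0)           # per-feature spread of observed values
    votes = []
    for _ in range(n_draws):
        X = X_missing.copy()
        X[mask] = (mu + sd * rng.standard_normal(X.shape))[mask]   # one plausible completion
        votes.append(classify(X))
    votes = np.stack(votes)                                        # (n_draws, n_samples)
    return np.array([np.bincount(v).argmax() for v in votes.T])
```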
“…In this way, a robust decoding strategy is required to obtain accurate results. Several techniques for the binary decoding step have been proposed in the literature (Windeatt and Ghaderi, 2003) (Ishii et al, 2005) (Passerini et al, 2004) (Dekel and Singer, 2002), though the most common ones are the Hamming and the Euclidean approaches (Windeatt and Ghaderi, 2003). In the work of (Pujol et al, 2006), authors showed that usually the Euclidean distance was more suitable than the traditional Hamming distance in both the binary and the ternary cases.…”
Section: Decoding Designs
confidence: 99%
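The two common decoding rules mentioned above, Hamming and Euclidean, can be written in a few lines. The following NumPy sketch assumes {-1, +1} codewords and real-valued dichotomizer outputs, and is purely illustrative of the idea rather than any cited implementation.

```python
import numpy as np

def ecoc_decode(code_matrix, outputs, metric="hamming"):
    """Pick the class whose codeword is closest to the binary-classifier outputs.

    code_matrix : (n_classes, n_dichotomizers) array over {-1, +1}
    outputs     : (n_dichotomizers,) array; signs are used for Hamming,
                  the raw real values for the Euclidean variant
    """
    if metric == "hamming":
        dists = np.sum(code_matrix != np.sign(outputs), axis=1)   # count sign disagreements
    elif metric == "euclidean":
        dists = np.linalg.norm(code_matrix - outputs, axis=1)     # distance to each codeword
    else:
        raise ValueError(metric)
    return int(np.argmin(dists))

# Toy usage: three classes encoded by three dichotomizers.
M = np.array([[+1, +1, -1],
              [+1, -1, +1],
              [-1, +1, +1]])
f = np.array([0.9, -0.4, 0.2])       # dichotomizer outputs for one test sample
print(ecoc_decode(M, f, "hamming"), ecoc_decode(M, f, "euclidean"))   # both pick class 1
```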
“…In (Windeatt and Ghaderi, 2003), Inverse Hamming Distance (IHD) and Centroid distance (CEN) for binary problems are introduced. Other decoding strategies for nominal, discrete and heterogeneous attributes have been proposed in (Ishii et al, 2005). With the introduction of the zero symbol, Allwein et al (Allwein et al, 2002) show the advantage of using a loss based function of the margin of the base classifier on the ternary ECOC.…”
Section: Introduction
confidence: 99%
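For completeness, a minimal sketch of loss-based decoding in the style of Allwein et al. on a ternary code matrix: each class accumulates a loss of the margin M[r, j] * f_j over all dichotomizers, so a zero (ignored-class) entry contributes only the constant loss(0). The exponential loss used here is one common choice, not necessarily the one in the cited work.

```python
import numpy as np

def loss_based_decode(code_matrix, outputs, loss=lambda z: np.exp(-z)):
    """Ternary ECOC decoding: choose the class with the smallest total loss of margins.

    code_matrix : (n_classes, n_dichotomizers) array over {-1, 0, +1}
    outputs     : (n_dichotomizers,) real-valued classifier outputs (margins)
    """
    margins = code_matrix * outputs                    # per-class, per-dichotomizer margin
    return int(np.argmin(loss(margins).sum(axis=1)))   # zero entries add the constant loss(0)
```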