1998
DOI: 10.1007/978-1-4615-5689-3

Feature Selection for Knowledge Discovery and Data Mining

Cited by 1,454 publications (1,053 citation statements)
References 0 publications
“…We have taken our original pattern extraction method [11] for the case study described in Section IV. Attribute selection procedures are done by selecting relevant attributes manually or using attribute selection algorithms [7].…”
Section: A. Procedures to Mine Time-Series Rules
confidence: 99%
“…As in any other prediction method, classification trees have their predictive accuracy greatly affected by inconsistencies within the training dataset (LAGACHERIE & HOLMES, 1997). LIU & MOTODA (1998) contended that handling large datasets in a classification tree approach could be inefficient in terms of learning time and prediction accuracy, and proposed instance selection, a branch of statistical learning research, to handle datasets containing redundant and/or noisy instances as well as multi-collinearity. Instance selection is applied to reduce noise and redundant information in the whole dataset.…”
Section: Introduction
confidence: 99%
“…The challenge is to extract a representative subset that is small enough that can be handled easily by learning algorithms but still large enough that no relevant information is lost in the process. The main goal of instance selection is to reasonably reduce large datasets for faster predictions while preserving or even increasing accuracy (BUI et al, 1999;LIU & MOTODA, 1998). Advancing this research topic, SCHMIDT et al (2008) suggested that spatially constrained instance selection should be investigated in future pedometric research "focusing on the boundaries instead of concentrating on the more homogeneous core of the class areas".…”
Section: Introduction
confidence: 99%
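The citing papers above describe instance selection as pruning noisy and redundant instances so a smaller training set yields equal or better accuracy. A minimal sketch of one classical noise filter of this kind is Wilson's edited nearest-neighbour rule, which drops any instance whose k nearest neighbours mostly disagree with its own label. This is a generic illustration, not the specific procedure proposed by Liu & Motoda (1998); the toy dataset and function name are invented for the example.

```python
# Illustrative instance selection via Wilson's edited nearest-neighbour
# rule: an instance is kept only if the majority label among its k
# nearest neighbours matches its own label (a simple noise filter).
from collections import Counter
import math

def edited_nn(X, y, k=3):
    """Return indices of instances kept after noise editing."""
    keep = []
    for i, xi in enumerate(X):
        # sort all other instances by Euclidean distance to xi
        dists = sorted(
            (math.dist(xi, xj), j) for j, xj in enumerate(X) if j != i
        )
        neighbour_labels = [y[j] for _, j in dists[:k]]
        majority, _ = Counter(neighbour_labels).most_common(1)[0]
        if majority == y[i]:   # neighbours agree: keep the instance
            keep.append(i)
    return keep

# Toy dataset: two clusters, with one mislabelled (noisy) point
# sitting inside cluster "a" but tagged "b".
X = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
     (5.0, 5.0), (5.1, 4.9), (4.9, 5.1),
     (0.15, 0.1)]
y = ["a", "a", "a", "b", "b", "b", "b"]  # last point is noise

print(edited_nn(X, y))  # → [0, 1, 2, 3, 4, 5]; noisy index 6 is dropped
```

In practice the same idea scales to large datasets with spatial index structures for the neighbour search, which is where the learning-time gains discussed above come from.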
“…This "curse of dimensionality", along with the expense of measuring additional features, motivates feature dimensionality reduction. Though no known deterministic algorithm finds the optimal feature set for a classification task, a wide range of feature selection algorithms may find near-optimal feature sets [2].…”
Section: Introduction
confidence: 99%
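Since exhaustive search over feature subsets is exponential, the near-optimal algorithms referenced above are typically greedy heuristics. A minimal sketch of one such heuristic, sequential forward selection, is shown below; `score` stands for any subset-quality measure (e.g. cross-validated accuracy), and the additive toy score and weights are invented for the example, not taken from the cited work.

```python
# Illustrative sequential forward selection: greedily add the feature
# that most improves the subset score, avoiding exhaustive search over
# all 2^n subsets.

def forward_selection(n_features, score, k):
    """Greedily grow a feature subset of size k maximising `score`."""
    selected = []
    while len(selected) < k:
        best_feat, best_score = None, float("-inf")
        for f in range(n_features):
            if f in selected:
                continue
            s = score(selected + [f])  # evaluate candidate subset
            if s > best_score:
                best_feat, best_score = f, s
        selected.append(best_feat)
    return selected

# Toy relevance weights: features 2 and 0 are most informative, with a
# redundancy penalty if features 2 and 3 are chosen together.
weights = [0.6, 0.1, 0.9, 0.85]

def toy_score(subset):
    penalty = 0.5 if {2, 3} <= set(subset) else 0.0
    return sum(weights[f] for f in subset) - penalty

print(forward_selection(4, toy_score, 2))  # → [2, 0]
```

The greedy search evaluates only O(n·k) subsets instead of 2^n, which is why such wrappers find near-optimal rather than provably optimal feature sets.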