2010
DOI: 10.5120/1564-1499
Attribute Reduction using Forward Selection and Relative Reduct Algorithm

Abstract: Attribute reduction of an information system is a key problem in rough set theory and its applications; rough set theory has been one of the most successful methods for feature selection and is among the most useful data mining techniques. This paper proposes relative reduct to solve the attribute reduction problem in rough set theory, applying the relative reduct algorithm to reduce a car dataset. The redundant attribut…
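The relative reduct approach described in the abstract removes attributes whose deletion leaves the decision dependency unchanged. A minimal Python sketch, assuming a decision table of dicts and a simple backward-elimination order (the function names, data layout, and elimination order are illustrative assumptions, not taken from the paper):

```python
def partition(rows, attrs):
    """Group row indices into indiscernibility classes by their values on attrs."""
    blocks = {}
    for i, row in enumerate(rows):
        key = tuple(row[a] for a in attrs)
        blocks.setdefault(key, set()).add(i)
    return list(blocks.values())

def dependency(rows, cond, decision):
    """Degree of dependency: fraction of rows in the positive region,
    i.e. rows whose condition class is pure with respect to the decision."""
    pos = 0
    for block in partition(rows, cond):
        if len({rows[i][decision] for i in block}) == 1:
            pos += len(block)
    return pos / len(rows)

def relative_reduct(rows, cond, decision):
    """Drop each attribute in turn; keep the drop if dependency is unchanged."""
    reduct = list(cond)
    base = dependency(rows, cond, decision)
    for a in list(cond):
        trial = [x for x in reduct if x != a]
        if trial and dependency(rows, trial, decision) == base:
            reduct = trial
    return reduct
```

On a small toy table with one redundant attribute, the sketch returns a proper subset of the conditions while preserving full dependency on the decision.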

Cited by 5 publications (4 citation statements) · References 5 publications
“…It was a model that started with zero variables (an empty model), and the variables were then inserted one by one. Performance was evaluated for each added variable, and only the highest-performing attributes were added to the selection for the objective function until certain criteria were met [20].…”
Section: B. Data Preprocessing
confidence: 99%
“…This method is a model that begins with zero variables (an empty model), and the variables are then inserted one by one. The performance is evaluated for each added variable, and only the attributes with the highest performance are added to the selection for the objective function until certain criteria are fulfilled [20]. The feature selection results are used to build a Naïve Bayes classification model to determine which feature selection method is better, more efficient, and more appropriate.…”
Section: Introduction
confidence: 99%
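The forward selection procedure quoted above can be sketched in a few lines of Python. This is an illustrative sketch, not the cited papers' implementation: the greedy loop and the `purity_score` evaluator (majority-class accuracy per attribute-value group, standing in for the Naïve Bayes model the citing paper uses) are assumptions.

```python
def forward_select(rows, candidates, decision, score, min_gain=0.0):
    """Greedy forward selection: start with no attributes, then repeatedly
    add the attribute that most improves the score, stopping when no
    candidate improves it by more than min_gain."""
    selected, best = [], float("-inf")
    while True:
        gains = [(score(rows, selected + [a], decision), a)
                 for a in candidates if a not in selected]
        if not gains:
            break
        s, a = max(gains)
        if s <= best + min_gain:
            break
        selected.append(a)
        best = s
    return selected

def purity_score(rows, attrs, decision):
    """Stand-in evaluator (an assumption, not from the paper): accuracy of
    predicting the majority decision within each attribute-value group."""
    groups = {}
    for row in rows:
        groups.setdefault(tuple(row[a] for a in attrs), []).append(row[decision])
    correct = sum(max(vals.count(v) for v in set(vals))
                  for vals in groups.values())
    return correct / len(rows)
```

In practice the `score` callback would be replaced by cross-validated accuracy of the actual classifier (e.g. Naïve Bayes), which is what makes this a wrapper-style selection method.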
“…Various combinations of response variables are taken and their means are analyzed. ANOVA can be performed considering a single variable affecting the dependent variable, or more than one variable affecting the dependent variable, referred to as one-way ANOVA and two-way ANOVA respectively [10]. The difference lies in the number of factors that affect the response variable.…”
Section: Feature or Attribute Selection
confidence: 99%
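The one-way ANOVA mentioned above compares between-group variation to within-group variation via the F statistic. A minimal self-contained sketch (in practice one would use a library routine such as `scipy.stats.f_oneway`; this version is for illustration only):

```python
def one_way_anova_f(*groups):
    """F statistic for one-way ANOVA: mean square between groups
    divided by mean square within groups."""
    k = len(groups)                                  # number of groups
    n = sum(len(g) for g in groups)                  # total observations
    grand = sum(sum(g) for g in groups) / n          # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)                # df_between = k - 1
    ms_within = ss_within / (n - k)                  # df_within = n - k
    return ms_between / ms_within
```

Identical group means give F = 0, while well-separated means give a large F, which is what makes the statistic usable as a feature-relevance score.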
“…The major techniques for dimensionality reduction include feature selection and feature extraction, where feature extraction refers to mapping the original high-dimensional data onto a lower-dimensional space [10]. Feature selection reduces the number of features, removes irrelevant, redundant, or noisy data, and brings immediate results for applications [1]. Feature selection selects a new set of attributes from the existing ones based on the extent to which an attribute or feature is relevant to the characteristics of the data of concern [21].…”
Section: Feature or Attribute Selection
confidence: 99%
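The selection-versus-extraction distinction drawn above can be made concrete in a few lines: selection keeps a subset of the original columns, while extraction maps every row onto new, derived features. A minimal sketch (the function names and the linear map are illustrative assumptions):

```python
def select_features(X, keep):
    """Feature selection: retain only the listed column indices."""
    return [[row[i] for i in keep] for row in X]

def extract_features(X, W):
    """Feature extraction: project each row onto len(W) derived features,
    each a weighted sum of all original columns (e.g. a PCA-like map)."""
    return [[sum(x * w for x, w in zip(row, weights)) for weights in W]
            for row in X]
```

Selected features keep their original meaning, which is why selection is preferred when interpretability of the retained attributes matters.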