2014
DOI: 10.5121/ijaia.2014.5403
A Study on Rough Set Theory Based Dynamic Reduct for Classification System Optimization

Abstract: At present, a huge amount of data is generated every minute and transferred frequently. Although the data is sometimes static, most commonly it is dynamic and transactional. Newly generated data is constantly added to the old/existing data. One approach to discovering knowledge from this incremental data is to rerun the algorithm on each modified data set, which is time consuming. Moreover, to analyze the datasets properly, construction of an efficient classifier model is nec…

Cited by 5 publications (2 citation statements) · References 22 publications (21 reference statements)
“…Thus, for a decision system, DS = (U, A, L), a set P ⊆ A is called a reduct if (i) both P and A provide the same set of equivalence classes, and (ii) P is minimal, i.e., after removal of any feature a from P, P − {a} and A provide different sets of equivalence classes. However, finding an exact reduct is an NP-hard problem, and in RST an approximate solution is provided by the Quick Reduct generation algorithm [52,53]. In RST, we compute the dependency of L on a feature subset, say P ⊆ A, and this dependency (i.e., γ_P(L)) is defined by Equation (7).…”
Section: Feature Selectionmentioning
confidence: 99%
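The reduct definition and the dependency measure γ_P(L) quoted above can be sketched in code. The following is an illustrative Python implementation of the standard greedy Quick Reduct heuristic, not code from the cited paper; the decision-table representation (a dict mapping each object to its attribute values) and all function names are assumptions made for the sketch.

```python
def partition(universe, table, features):
    """Group objects into equivalence classes by their values on `features`."""
    classes = {}
    for obj in universe:
        key = tuple(table[obj][f] for f in features)
        classes.setdefault(key, set()).add(obj)
    return list(classes.values())

def positive_region(universe, table, features, decision):
    """Objects whose equivalence class is consistent w.r.t. the decision."""
    pos = set()
    for eq_class in partition(universe, table, features):
        decisions = {table[obj][decision] for obj in eq_class}
        if len(decisions) == 1:          # class maps to a single decision value
            pos |= eq_class
    return pos

def dependency(universe, table, features, decision):
    """gamma_P(L) = |POS_P(L)| / |U|  (0 for an empty feature set)."""
    if not features:
        return 0.0
    return len(positive_region(universe, table, features, decision)) / len(universe)

def quick_reduct(universe, table, attributes, decision):
    """Greedily add the attribute giving the largest dependency increase
    until the subset's dependency equals that of the full attribute set."""
    gamma_all = dependency(universe, table, attributes, decision)
    reduct = []
    while dependency(universe, table, reduct, decision) < gamma_all:
        best = max((a for a in attributes if a not in reduct),
                   key=lambda a: dependency(universe, table, reduct + [a], decision))
        reduct.append(best)
    return reduct
```

Note that, as the quoted passage says, this greedy procedure only approximates a minimal reduct: it guarantees γ_P(L) = γ_A(L) but may keep a redundant attribute that an exhaustive (NP-hard) search would drop.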
“…Sengupta and Das [41] emphasized knowledge discovery from incremental data and presented an algorithm to generate a dynamic reduct using rough set theory. The Discrete Particle Swarm Optimization (DPSO) algorithm took advantage of the discernibility matrix and the frequency values of features to divide these attributes into two categories, i.e.…
Section: Comparison Of State Of the Art Approachesmentioning
confidence: 99%