2018
DOI: 10.1016/j.knosys.2018.03.015

Attribute reduction based on max-decision neighborhood rough set model

Cited by 67 publications (24 citation statements)
References 54 publications
“…The main purpose of attribute reduction based on neighborhood rough sets is to eliminate redundant attributes from classified data and extract useful information. (1) Singular value decomposition: this method works well when the data dimension is high, and it is often used as a preprocessing step to achieve convergence of fuzzy rough reduction on high-dimensional data sets; but its computational cost is high [13,14]. (2) Principal component analysis: the mutual influence between evaluation indexes is eliminated by replacing the original variables with a few principal components that have the largest contributions. Deleting irrelevant or unimportant attributes is necessary to eliminate the interference of irrelevant features when the data have higher dimensions [15]. (3) Effective feature extraction in deep learning: data-driven deep learning analysis has been developed and applied in many fields.…”
Section: Introduction
confidence: 99%
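The quoted passage above describes PCA-style reduction: replace the original attributes with the few principal components that carry the largest contribution to the variance. A minimal NumPy sketch of that idea (function name, threshold, and data are our own illustrations, not taken from the cited papers):

```python
import numpy as np

# Illustrative PCA via SVD: keep the smallest number of principal
# components whose cumulative explained variance passes a threshold,
# and replace the original attributes with the projected scores.
def pca_reduce(X, var_threshold=0.95):
    Xc = X - X.mean(axis=0)                    # center each attribute
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var_ratio = s**2 / np.sum(s**2)            # explained-variance ratio
    k = np.searchsorted(np.cumsum(var_ratio), var_threshold) + 1
    return Xc @ Vt[:k].T                       # project onto top-k components

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=100)  # one redundant attribute
Z = pca_reduce(X, 0.95)
print(Z.shape)  # fewer columns than X: the redundant direction is dropped
```

Because column 3 nearly duplicates column 0, one principal direction carries almost no variance and is discarded at the 95% threshold.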
“…Therefore, the time complexity of the ARNRJE algorithm is approximately O(mn). So far, ARNRJE appears to be more efficient than some of the existing algorithms for attribute reduction in [18, 36, 45, 52, 53] for neighborhood decision systems. Furthermore, its space complexity is O(mn).…”
Section: Attribute Reduction Using Lebesgue and Entropy Measures I
confidence: 99%
“…Sun and Xu [35] proposed a positive region-based granular space for feature selection based on rough sets. Nevertheless, the positive region in these models only draws attention to the consistent samples whose similarity classes are completely contained in some decision classes [36]. Meng et al. [37] presented an intersection neighborhood for numerical data, and designed a gene selection algorithm using positive region and gene ranking based on neighborhood rough sets.…”
Section: Introduction
confidence: 99%
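The passage above notes that the classical positive region keeps only the consistent samples, those whose neighborhoods lie entirely inside a single decision class. A small sketch of that criterion (delta, distance metric, and data are our illustrative choices, not from the cited papers):

```python
import numpy as np

# Classical neighborhood positive region (illustrative sketch):
# a sample is consistent only if every sample in its delta-neighborhood
# shares its decision label, i.e. the neighborhood is "pure".
def positive_region(X, y, delta=0.3):
    pos = []
    for i in range(len(X)):
        dist = np.linalg.norm(X - X[i], axis=1)   # Euclidean distances
        nbr = np.where(dist <= delta)[0]          # delta-neighborhood of i
        if np.all(y[nbr] == y[i]):                # pure -> consistent sample
            pos.append(i)
    return pos

X = np.array([[0.0], [0.2], [0.4], [0.8], [1.0]])
y = np.array([0, 0, 1, 1, 1])
print(positive_region(X, y, delta=0.3))  # [0, 3, 4]
```

Samples 1 and 2 sit near the class boundary, so their neighborhoods mix both labels and they are excluded, which is exactly the limitation the quoted passage criticizes.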
“…An attribute reduction method [10] was proposed based on the Max-Decision Neighborhood Rough Set (MDNRS) model. This method focused on the boundary samples and enlarged the positive region by adding the samples whose neighborhoods have a maximal intersection with some decision class.…”
Section: Literature Survey
confidence: 99%
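The passage above says MDNRS enlarges the positive region by admitting boundary samples whose neighborhoods intersect maximally with some decision class. A hedged sketch of our simplified reading of that idea (names, delta, and data are illustrative, not the authors' implementation):

```python
import numpy as np

# Simplified max-decision criterion (our reading of the idea above):
# sample i joins the positive region when its own decision class achieves
# the maximal overlap with its delta-neighborhood, so pure neighborhoods
# AND majority-consistent boundary samples are both kept.
def max_decision_positive_region(X, y, delta=0.3):
    pos = []
    classes = np.unique(y)
    for i in range(len(X)):
        dist = np.linalg.norm(X - X[i], axis=1)
        nbr = np.where(dist <= delta)[0]          # delta-neighborhood of i
        counts = {c: np.sum(y[nbr] == c) for c in classes}
        if counts[y[i]] == max(counts.values()):  # own class has max overlap
            pos.append(i)
    return pos

X = np.array([[0.0], [0.2], [0.4], [0.8], [1.0]])
y = np.array([0, 0, 1, 1, 1])
print(max_decision_positive_region(X, y, delta=0.3))  # [0, 1, 2, 3, 4]
```

On the same data, the strict purity criterion rejects the boundary samples 1 and 2, while the max-decision relaxation keeps them, illustrating how the positive region is enlarged.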