2012
DOI: 10.1109/tfuzz.2011.2181180

On Robust Fuzzy Rough Set Models

Cited by 146 publications (56 citation statements)
References 54 publications
“…An important challenge is to extend the formal treatment to noise-tolerant fuzzy rough set models, such as those studied in [23][24][25][26][27][28][29]. Observing that the implicator-conjunctor based approximations are sensitive to small changes in the arguments (for instance, because of their reliance on inf and sup operations), many authors have proposed models that are more robust against data perturbation.…”
Section: Discussion
confidence: 99%
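To make that sensitivity concrete, here is a minimal sketch, not code from the cited works: implicator-conjunctor based lower and upper approximations with Łukasiewicz connectives on a toy four-sample similarity relation. The relation R, the fuzzy set A, and the choice of connectives are all illustrative assumptions. Because the lower approximation takes an infimum over all samples, flipping a single (possibly noisy) membership value can shift the result for every sample related to it.

```python
# Sketch of implicator-conjunctor based fuzzy rough approximations.
# R, A, and the Lukasiewicz connectives are illustrative assumptions,
# not the specific setup of the cited paper.
import numpy as np

def lukasiewicz_implicator(a, b):
    """I(a, b) = min(1, 1 - a + b)."""
    return np.minimum(1.0, 1.0 - a + b)

def lukasiewicz_tnorm(a, b):
    """T(a, b) = max(0, a + b - 1)."""
    return np.maximum(0.0, a + b - 1.0)

def lower_approximation(R, A):
    """(R down A)(x) = inf_y I(R(x, y), A(y)); one sample drives the inf."""
    return lukasiewicz_implicator(R, A[None, :]).min(axis=1)

def upper_approximation(R, A):
    """(R up A)(x) = sup_y T(R(x, y), A(y)); one sample drives the sup."""
    return lukasiewicz_tnorm(R, A[None, :]).max(axis=1)

# Toy data: 4 samples, fuzzy similarity relation R, decision class A.
R = np.array([[1.0, 0.8, 0.3, 0.2],
              [0.8, 1.0, 0.4, 0.3],
              [0.3, 0.4, 1.0, 0.7],
              [0.2, 0.3, 0.7, 1.0]])
A = np.array([1.0, 1.0, 0.1, 0.0])

print(lower_approximation(R, A))
# Perturb a single membership value, as a noisy label would:
A_noisy = A.copy()
A_noisy[1] = 0.0
print(lower_approximation(R, A_noisy))
```

On this toy data, the lower approximation of the first sample drops from 0.8 to 0.2 after the single flip, which is exactly the data-perturbation sensitivity the citing papers point to.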
“…This can be a disadvantage in a data analysis context, since data samples may be erroneous. Such noisy data can perturb the approximations and therefore weaken the machine learning algorithms that invoke them [27].…”
Section: Robust Fuzzy Rough Set Models
confidence: 99%
“…Moreover, some models use other aggregation operators than the infimum and supremum operators [7,18]. To the best of our knowledge, the seven models we discuss here are the most widely used robust fuzzy rough set models [27,65].…”
Section: Robust Fuzzy Rough Set Models
confidence: 99%
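As a hedged illustration of one such alternative aggregation, and not necessarily one of the seven models the citing paper discusses, the sketch below replaces the hard infimum with an ordered weighted averaging (OWA) operator, so that no single noisy sample can dominate the lower approximation. The linear weight vector and the toy data are assumptions.

```python
# Sketch of an OWA-based "soft" lower approximation; the weight scheme
# and data are illustrative assumptions.
import numpy as np

def lukasiewicz_implicator(a, b):
    return np.minimum(1.0, 1.0 - a + b)

def owa_soft_min(values, weights):
    # OWA: apply the weights to the values sorted in ascending order;
    # the largest weights sit on the smallest values, giving a soft inf.
    return float(np.sort(values) @ weights)

def owa_lower_approximation(R, A, weights):
    impl = lukasiewicz_implicator(R, A[None, :])
    return np.array([owa_soft_min(row, weights) for row in impl])

R = np.array([[1.0, 0.8, 0.3, 0.2],
              [0.8, 1.0, 0.4, 0.3],
              [0.3, 0.4, 1.0, 0.7],
              [0.2, 0.3, 0.7, 1.0]])
A = np.array([1.0, 1.0, 0.1, 0.0])

# Linearly decreasing weights: the minimum still counts most, but one
# perturbed sample can no longer fix the result on its own.
w = np.arange(R.shape[0], 0, -1, dtype=float)
w /= w.sum()
print(owa_lower_approximation(R, A, w))
```

The design point is the trade-off: the closer the weight vector is to putting all mass on the minimum, the closer the operator is to the classical inf and the less robust it becomes.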
“…Based on the outcome of the decision tree algorithm, either four (30,31,32,33) or five (22,30,31,32,33) features are removed in the constructed trees. The main reason for removal is that features are numerical and some are used repeatedly.…”
Section: Datasets and Selected Features
confidence: 99%
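The excerpt describes feature removal driven by which attributes a decision tree actually splits on. As a hypothetical sketch with synthetic data (the dataset, feature indices, and use of scikit-learn's DecisionTreeClassifier are all assumptions, not the cited study's setup), one can read the used features off a fitted tree and treat the rest as removal candidates:

```python
# Sketch: identify which features a fitted decision tree uses in splits.
# Data and indices are synthetic, for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                 # 6 hypothetical numerical features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
used = sorted({int(f) for f in tree.tree_.feature if f >= 0})  # leaves are -2
unused = sorted(set(range(X.shape[1])) - set(used))

print("features used in splits:", used)
print("features never used (candidates for removal):", unused)
```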