2016
DOI: 10.1016/j.knosys.2015.10.025

Rough sets in distributed decision information systems

Cited by 36 publications (6 citation statements)
References 50 publications
“…Within another context, the context of limited labeled big data, in [32], authors introduced a theoretic framework called local rough set and developed a series of corresponding concept approximation and attribute reduction algorithms with linear time complexity, which can efficiently and effectively work in limited labeled big data. In the context of distributed decision information systems, i.e., several separate data sets dealing with different contents/topics but concerning the same data items, in [19], authors proposed a distributed definition of rough sets to deal with the reduction of these information systems.…”
Section: Literature Review
confidence: 99%
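The rough-set concept approximation that the statement above refers to can be illustrated with a minimal sketch. This is not the local rough set of [32] or the distributed definition of [19]; it is a hypothetical single-table toy showing the basic lower/upper approximation that those frameworks generalize, with made-up attribute names:

```python
from collections import defaultdict

def equivalence_classes(table, attrs):
    """Group objects that are indiscernible on the chosen attributes."""
    classes = defaultdict(set)
    for obj, row in table.items():
        classes[tuple(row[a] for a in attrs)].add(obj)
    return list(classes.values())

def lower_upper(table, attrs, target):
    """Lower approximation: blocks certainly inside `target`.
    Upper approximation: blocks that possibly intersect `target`."""
    lower, upper = set(), set()
    for block in equivalence_classes(table, attrs):
        if block <= target:
            lower |= block
        if block & target:
            upper |= block
    return lower, upper

# Hypothetical toy decision table: object id -> condition attribute values.
table = {
    1: {"a": 0, "b": 0}, 2: {"a": 0, "b": 0},
    3: {"a": 0, "b": 1}, 4: {"a": 1, "b": 1},
}
lo, up = lower_upper(table, ["a", "b"], target={1, 3})
print(sorted(lo), sorted(up))  # → [3] [1, 2, 3]
```

Objects 1 and 2 are indiscernible, so neither can be placed in the lower approximation of {1, 3}; the gap between the two approximations is exactly the boundary region that rough-set reduction methods work with.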
“…The rough set theory is widely known as a reasonable and efficient soft computing method for handling several decision making situations via attribute selections and rule acquisitions; see [19,27,28]. Moreover, in the past decades, various generalized rough set models have been constructed in step with the actual demands of real-world situations; see [26,30].…”
Section: Introduction
confidence: 99%
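The attribute selection mentioned in the statement above is usually cast as finding a reduct: a subset of attributes that preserves the positive region of the decision. A minimal, hypothetical greedy sketch (not the method of [19,27,28]; the table and attribute names are invented):

```python
from collections import defaultdict

def partition(rows, attrs):
    """Equivalence blocks (as index lists) induced by `attrs`."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    return blocks.values()

def positive_region(rows, attrs, decision):
    """Objects whose block is consistent on the decision attribute."""
    pos = set()
    for block in partition(rows, attrs):
        if len({rows[i][decision] for i in block}) == 1:
            pos.update(block)
    return pos

def reduct(rows, attrs, decision):
    """Greedily drop attributes that do not shrink the positive region."""
    full = positive_region(rows, attrs, decision)
    kept = list(attrs)
    for a in attrs:
        trial = [x for x in kept if x != a]
        if trial and positive_region(rows, trial, decision) == full:
            kept = trial
    return kept

# Toy table: the decision "d" depends only on "c", so "a" and "b" are redundant.
rows = [
    {"a": 0, "b": 0, "c": 0, "d": "no"},
    {"a": 0, "b": 1, "c": 1, "d": "yes"},
    {"a": 1, "b": 0, "c": 1, "d": "yes"},
    {"a": 1, "b": 1, "c": 0, "d": "no"},
]
red = reduct(rows, ["a", "b", "c"], "d")
print(red)  # → ['c']
```

Decision rules can then be read directly off the blocks of the reduct, which is the rule-acquisition side of the same machinery.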
“…The first one is to process big data in an allowable time [16] [27]. The second one is to omit the redundant and unnecessary features within the data that may impede classification ability and increase the consumption of computation and memory storage [1] [18]. Running parallel algorithms on a cluster is one common way to handle a large amount of data when it cannot be processed on a single computer.…”
Section: Introduction
confidence: 99%
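The parallel, cluster-based processing mentioned in the last statement works because the grouping step behind equivalence classes is associative: each worker can group its own share of the rows, and the per-partition results merge losslessly. A minimal sequential sketch of that map/merge shape (hypothetical names; real cluster frameworks add distribution and fault tolerance on top):

```python
from collections import defaultdict

def local_classes(chunk, attrs):
    """Map step: group one partition's (index, row) pairs by attribute signature."""
    local = defaultdict(set)
    for idx, row in chunk:
        local[tuple(row[a] for a in attrs)].add(idx)
    return local

def merge(parts):
    """Reduce step: union blocks that share the same attribute signature."""
    merged = defaultdict(set)
    for part in parts:
        for key, objs in part.items():
            merged[key] |= objs
    return merged

# Split a toy table across two simulated workers and merge their results.
rows = [{"a": 0, "b": 0}, {"a": 0, "b": 0}, {"a": 0, "b": 1}, {"a": 1, "b": 1}]
indexed = list(enumerate(rows))
parts = [local_classes(indexed[:2], ["a", "b"]),
         local_classes(indexed[2:], ["a", "b"])]
merged = merge(parts)
print(dict(merged))  # same blocks as grouping the whole table at once
```

Because the merged result equals the single-machine grouping, downstream rough-set computations (approximations, positive regions) see the same partition regardless of how the data was split.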