2022
DOI: 10.1016/j.procs.2022.09.160
A New Approach to Constructing Maximal Consistent Blocks for Mining Incomplete Data

Cited by 2 publications (3 citation statements); references 11 publications.
“…The idea of the sequential update of the set of maximal consistent blocks based on Property 5 from [16] proposed in [25] is more effective than the commonly used method of constructing the blocks. However, the subsets merging, which removes subsets by performing comparisons between two sets (the current set of maximal consistent blocks obtained so far and the new one built for the next attribute), has a significant impact on the overall performance.…”
Section: Parallelization of the Maximal Consistent Blocks Computations
confidence: 99%
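The sequential update described in the excerpt above — intersect the current set of maximal consistent blocks with the blocks built for the next attribute, then remove any result contained in another — can be sketched as follows. This is a generic illustration of the subset-merging step, not the cited authors' implementation; the function name and representation (blocks as sets of case indices) are assumptions.

```python
from itertools import product

def update_maximal_blocks(current, new_blocks):
    """Intersect every block in `current` with every block built for the
    next attribute, then keep only the maximal (non-subsumed) results.
    A generic sketch of the subset-merging step, not the paper's code."""
    # All pairwise intersections are candidate blocks for the updated set.
    candidates = {frozenset(b) & frozenset(c)
                  for b, c in product(current, new_blocks)}
    candidates.discard(frozenset())
    # Subset merging: drop any candidate strictly contained in another one.
    # This pairwise comparison is the quadratic step the excerpt identifies
    # as the performance bottleneck.
    return [s for s in candidates if not any(s < t for t in candidates)]
```

For example, updating the blocks `{1,2,3}, {3,4}` with next-attribute blocks `{1,2}, {2,3,4}` yields the maximal intersections `{1,2}, {2,3}, {3,4}` after the subsumed block `{3}` is merged away.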
“…In practice, however, the efficiency is reduced by the fact that p − 1 processes have to wait for the process executing the slowest task in a batch; therefore, in the worst case, the relative improvement compared to the sequential algorithm is O(n n. For the demonstration of the real values of the efficiency, the abalone data set with all missing attribute values interpreted as "do not care" conditions was selected, because during the experiments performed using the sequential algorithm this set turned out to be the most demanding one [25]. The analysis was conducted using 4, 6, 8, 12, 16, 20, and 32 processors.…”
Section: Parallelization of the Maximal Consistent Blocks Computations
confidence: 99%
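The batch-synchronization effect the excerpt describes — p − 1 processes idle until the slowest task in a batch finishes — can be illustrated with a toy cost model. This is only a sketch under assumed uniform batching, not a measurement from the cited experiments:

```python
def batched_makespan(task_times, p):
    """Wall-clock time when tasks run in batches of p parallel processes
    and every batch waits for its slowest member (toy cost model)."""
    batches = [task_times[i:i + p] for i in range(0, len(task_times), p)]
    # Each batch finishes only when its longest task does.
    return sum(max(batch) for batch in batches)
```

With task times `[1, 2, 3, 4]` and p = 2, the model gives 2 + 4 = 6 time units, versus the ideal balanced value of 10 / 2 = 5 — the gap is exactly the waiting time on the slowest task per batch.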