2013
DOI: 10.1007/s10844-012-0232-5

BruteSuppression: a size reduction method for Apriori rule sets

Abstract: Association rule mining can provide genuine insight into the data being analysed; however, rule sets can be extremely large, and therefore difficult and time-consuming for the user to interpret. We propose reducing the size of Apriori rule sets by removing overlapping rules, and compare this approach with two standard methods for reducing rule set size: increasing the minimum confidence parameter, and increasing the minimum antecedent support parameter. We evaluate the rule sets in terms of confidence and cover…
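The truncated abstract describes the core idea (drop rules that overlap with other rules) but not the exact overlap criterion. The sketch below is a minimal Python illustration of one plausible reading, not the published BruteSuppression procedure: when two rules predict the same consequent and their antecedents are sufficiently similar, only the higher-confidence rule is kept. The `Rule` structure, the Jaccard overlap measure, and the `overlap_threshold` parameter are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    antecedent: frozenset   # items on the left-hand side
    consequent: frozenset   # items on the right-hand side
    support: float
    confidence: float


def jaccard(a: frozenset, b: frozenset) -> float:
    """Illustrative overlap measure between two antecedents."""
    return len(a & b) / len(a | b) if (a | b) else 0.0


def suppress_overlapping(rules: list[Rule], overlap_threshold: float = 0.8) -> list[Rule]:
    """Sketch of overlap-based suppression (an assumed reading, not the paper's algorithm).

    If two rules share a consequent and their antecedents overlap beyond the
    threshold, the lower-confidence rule is suppressed.
    """
    suppressed: set[int] = set()
    # Walk rules from highest to lowest confidence, so the stronger rule survives.
    order = sorted(range(len(rules)), key=lambda i: rules[i].confidence, reverse=True)
    for pos, i in enumerate(order):
        if i in suppressed:
            continue
        for j in order[pos + 1:]:
            if j in suppressed:
                continue
            if (rules[i].consequent == rules[j].consequent
                    and jaccard(rules[i].antecedent, rules[j].antecedent) >= overlap_threshold):
                suppressed.add(j)
    return [r for k, r in enumerate(rules) if k not in suppressed]


# Example: the second rule is absorbed by the first, more confident one.
rules = [
    Rule(frozenset({"bread", "butter"}), frozenset({"milk"}), 0.12, 0.85),
    Rule(frozenset({"bread"}), frozenset({"milk"}), 0.15, 0.70),
    Rule(frozenset({"beer"}), frozenset({"crisps"}), 0.08, 0.60),
]
print(suppress_overlapping(rules, overlap_threshold=0.5))
```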

Cited by 16 publications (4 citation statements)
References 28 publications
“…Combined with statistical information mining methods, the effect of the innovation reform of college English teaching is evaluated. 8 The grade X^(0) of the reform-effect evaluation is divided into N grades, X^(1), X^(2), …, X^(N), with X^(0) = ⋃_{i=1}^{N} X^(i). The statistical analysis and optimisation evaluation of the innovative reform effect of college English teaching are carried out by a rough set algorithm.…”
Section: Information Sampling and Feature Analysis
confidence: 99%
“…, n, which carry out indexing instruction control. Through weight vector coding, under K-means clustering, 8,9 the power-level index information of the attribute set mined from university sports data is as follows…”
Section: RETRACTED
confidence: 99%
“…But this method is not ideal, as it has several disadvantages. So, here we use an algorithm called brute suppression to decrease the rule set's size, which is based on eliminating rules that are similar to other rules [4]. The idea of this method is that if rules are alike, we can keep one and delete the rest [5].…”
Section: Literature Survey
confidence: 99%
“…But there are two drawbacks to this method: firstly, some important rules in dominant positions covering various cases may be eliminated, and secondly, there is no method for selecting the optimal threshold value. Here, we apply a process in which, if two rules are similar, we delete one of them [4].…”
Section: Introduction
confidence: 99%
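For contrast with the similarity-based deletion described in the excerpts above, the baseline the paper compares against (raising the minimum confidence or minimum antecedent support parameter) amounts to a simple global filter. The sketch below is an illustrative Python rendering of that baseline, with `min_confidence`, `min_antecedent_support`, and the dict-based rule records as assumed names, not anything specified by the paper; as the citing authors note, a single global threshold may also discard strong rules.

```python
def threshold_prune(rules, min_confidence=0.0, min_antecedent_support=0.0):
    """Baseline pruning: keep only rules meeting both global thresholds.

    `rules` is assumed to be an iterable of dicts with 'confidence' and
    'antecedent_support' keys; this is an illustrative baseline, not the
    paper's BruteSuppression method.
    """
    return [
        r for r in rules
        if r["confidence"] >= min_confidence
        and r["antecedent_support"] >= min_antecedent_support
    ]


# Example: raising the confidence threshold removes the weaker rule outright,
# regardless of whether a similar, stronger rule already covers it.
rules = [
    {"rule": "{bread, butter} -> {milk}", "confidence": 0.85, "antecedent_support": 0.12},
    {"rule": "{bread} -> {milk}", "confidence": 0.70, "antecedent_support": 0.15},
]
print(threshold_prune(rules, min_confidence=0.80))
```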