2007
DOI: 10.1016/j.knosys.2006.08.005
BitTableFI: An efficient mining frequent itemsets algorithm

Cited by 110 publications (55 citation statements)
References 16 publications
“…According to the literature, besides Agrawal and Srikant themselves [1], many researchers have endeavored to improve the Apriori algorithm [9][10][11][12][13]. For instance, Park et al. proposed the Direct Hashing and Pruning algorithm, which utilizes a hash approach to generate candidate itemsets [9].…”
Section: Related Work
confidence: 99%
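The hash approach mentioned in the statement above can be illustrated with a minimal sketch. This is a hypothetical simplification of the Direct Hashing and Pruning (DHP) idea, not the authors' implementation: while scanning transactions to count single items, every 2-itemset of each transaction is hashed into a small bucket table, and a candidate pair can only be frequent if its bucket count reaches the minimum support.

```python
from itertools import combinations

def dhp_candidate_2_itemsets(transactions, minsup, n_buckets=8):
    """Sketch of DHP-style candidate pruning (illustrative only)."""
    bucket = [0] * n_buckets
    item_count = {}
    for t in transactions:
        # First scan: count single items.
        for item in t:
            item_count[item] = item_count.get(item, 0) + 1
        # Hash every 2-itemset of the transaction into a bucket.
        for pair in combinations(sorted(t), 2):
            bucket[hash(pair) % n_buckets] += 1
    frequent_items = {i for i, c in item_count.items() if c >= minsup}
    # Candidate pairs of frequent items, pruned by the bucket filter:
    # a pair whose bucket count is below minsup cannot be frequent.
    return [
        pair
        for pair in combinations(sorted(frequent_items), 2)
        if bucket[hash(pair) % n_buckets] >= minsup
    ]
```

Because a bucket's count is always at least the true count of every pair hashed into it, the filter never discards a genuinely frequent pair; it only prunes candidates early, which is the point of DHP.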
“…Sampling is a powerful data reduction technique that has been applied to a variety of problems in database systems [11]. Dong and Han proposed an algorithm named BitTableFI, which compresses the database into a BitTable; candidate itemset generation and support counting can then be performed quickly with this special data structure [12]. The I-Apriori algorithm, proposed by Bhandari et al. in 2015, combines the FP-tree and the Apriori algorithm.…”
Section: Related Work
confidence: 99%
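The BitTable compression described above can be sketched as follows. This is a minimal, hypothetical illustration of the general bit-vector idea, not the paper's actual data structure: each item maps to an integer bit vector in which bit i is set when transaction i contains the item, so the support of any candidate itemset is the popcount of the AND of its items' vectors.

```python
def build_bittable(transactions):
    """Map each item to a bit vector over transaction ids."""
    table = {}
    for tid, t in enumerate(transactions):
        for item in t:
            table[item] = table.get(item, 0) | (1 << tid)
    return table

def support(bittable, itemset):
    """Support of a non-empty itemset = popcount of the ANDed vectors."""
    items = list(itemset)
    bits = bittable.get(items[0], 0)
    for item in items[1:]:
        bits &= bittable.get(item, 0)
    return bin(bits).count("1")
```

With this representation, support counting is a handful of bitwise AND operations per candidate, which is why BitTable-style structures speed up the candidate-generation phase.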
“…Dong [5] designed the BitTableFI algorithm, which compresses the database for quick candidate itemset generation. Subsequently, the Index-BitTableFI algorithm was proposed by Song [6] to avoid redundant operations when checking frequent itemsets.…”
Section: Related Work
confidence: 99%
“…To evaluate the efficiency of the proposed algorithm, experiments were carried out comparing Apriori, CBAR [4], BitTableFI [6] and MIbARM on two synthetic datasets, D10K.T10.I5.N5 and D50K.T20.I10.N5, provided by the IBM Almaden Quest research group [3,5]. The running times of the four algorithms on the two synthetic datasets are shown in Fig. 1. From Fig. 2 and Fig. 3, the number of rules produced by the interestingness-based MIbARM is smaller than that of the support- and confidence-based Apriori under all 64 combinations of parameters. In Apriori, a mass of redundant rules is generated as Sup_min decreases rapidly, which makes the number of association rules increase exponentially.…”
Section: Performance Evaluation
confidence: 99%
“…For this work, frequent itemset mining, first studied by Agrawal et al. [1] in 1993, has become more and more important, and many new algorithms and improvements have been proposed to solve the problem more efficiently, such as Eclat [36], FP-Growth [18], FPGrowth* [14], BitTableFI [12] and Index-BitTableFI [29].…”
Section: Introduction
confidence: 99%