The 3rd International Conference on Information Sciences and Interaction Sciences 2010
DOI: 10.1109/icicis.2010.5534718
MapReduce as a programming model for association rules algorithm on Hadoop

Cited by 95 publications (33 citation statements)
References 5 publications
“…Especially when the number of frequent itemsets is large or a data mining update is required, the algorithm offers higher efficiency and feasibility. In [18], an improved Apriori algorithm based on the MapReduce model is described, which can handle massive datasets across a large number of nodes on the Hadoop platform. In [19], the authors parallelize the Apriori algorithm for massive Web-log mining and verify the efficiency gains achieved through parallelization.…”
Section: Introduction
confidence: 99%
“…Several studies [17,28,45,27,20,23] have been done on mining frequent patterns in distributed environments, inspired by the MapReduce framework proposed by…”
Section: Related Work
confidence: 99%
“…Some of them [17,28,45] use a naive approach that computes the support of every itemset in the dataset in a single MapReduce round, resulting in huge data replication. An adaptation of the FP-Growth algorithm to MapReduce, called PFP [27], is a more sophisticated approach.…”
Section: Related Work
confidence: 99%
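The naive single-round scheme described above can be sketched in plain Python by simulating the map and reduce phases in-process. The toy transactions, the itemset-size bound, and the function names are illustrative assumptions, not details taken from the cited papers:

```python
from itertools import combinations
from collections import defaultdict

# Toy transaction database; in a real job each mapper reads one input split.
transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer"},
    {"milk", "diapers", "beer"},
    {"bread", "milk", "diapers"},
]

def map_phase(transaction, max_size=2):
    """Emit (itemset, 1) for every itemset up to max_size in one transaction.

    Emitting every contained itemset is what causes the heavy data
    replication in the shuffle that the cited critique points out.
    """
    for k in range(1, max_size + 1):
        for itemset in combinations(sorted(transaction), k):
            yield itemset, 1

def reduce_phase(pairs):
    """Sum counts per itemset, as the reducers would after the shuffle."""
    support = defaultdict(int)
    for itemset, count in pairs:
        support[itemset] += count
    return dict(support)

pairs = [p for t in transactions for p in map_phase(t)]
support = reduce_phase(pairs)
print(support[("bread", "milk")])  # → 2 (transactions 1 and 4)
```

Note how the number of emitted pairs grows combinatorially with the itemset-size bound, which is exactly the replication cost that motivates more sophisticated approaches such as PFP.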
“…Transactions are allocated to Mappers, and frequent k-itemsets are extracted from each Mapper before the results are shuffled through Combiners and the final k-itemsets are selected according to support and confidence thresholds. In Oruganti et al. [4], Kovacs et al. [5], and Li et al. [6], Mappers and Reducers are used, but in [7,8] Combiners are used in addition to Mappers and Reducers, for better shuffling and to address performance issues.…”
Section: Related Work
confidence: 99%
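The role of the Combiner in that pipeline — pre-aggregating each mapper's output locally so less data crosses the shuffle — can be sketched as follows. The splits and item names are hypothetical, chosen only to show the reduction in shuffle volume:

```python
from collections import Counter, defaultdict

# Hypothetical transaction splits, one list per mapper.
mapper_splits = [
    [{"a", "b"}, {"a", "c"}, {"a", "b"}],
    [{"b", "c"}, {"a", "b", "c"}],
]

def map_items(split):
    """Mapper: emit (item, 1) for each item in each transaction of a split."""
    for transaction in split:
        for item in transaction:
            yield item, 1

def combine(pairs):
    """Combiner: aggregate counts locally, within one mapper's output."""
    local = Counter()
    for item, n in pairs:
        local[item] += n
    return list(local.items())

def reduce_counts(all_pairs):
    """Reducer: final aggregation over the shuffled, combined pairs."""
    totals = defaultdict(int)
    for item, n in all_pairs:
        totals[item] += n
    return dict(totals)

raw = [list(map_items(s)) for s in mapper_splits]      # 11 raw pairs
combined = [combine(p) for p in raw]                   # 6 pairs after combining
shuffled = [pair for c in combined for pair in c]
totals = reduce_counts(shuffled)                       # {"a": 4, "b": 4, "c": 3}
```

Because the Combiner runs on the mapper side, the shuffle carries one pair per distinct item per mapper instead of one per occurrence, which is the performance benefit [7,8] attribute to using Combiners.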