2014
DOI: 10.1007/978-3-319-11692-1_15
Rule Based Classification on a Multi Node Scalable Hadoop Cluster

Cited by 7 publications (4 citation statements)
References 7 publications
“…Figure 9a shows the workflow execution time for three different Hadoop applications using the Single Node Method. The applications used were: WordCount, the standard example that ships with Hadoop; Rule Based Classification [17], a classification algorithm adapted for MapReduce; and Prefix Span [16], a MapReduce version of the popular sequential pattern mining algorithm. This experiment was run to show that the method developed is generic and can be used for a wide range of Hadoop applications.…”
Section: Results and Evaluation
confidence: 99%
“…MapReduce is a Java-based software framework that lets programmers run the same computation on multiple machines in parallel, processing data faster and in a reliable, fault-tolerant manner. Apache Hadoop is widely used in science because it implements the MapReduce programming model and is free to use [11,12]. With the help of the Google File System (GFS), the data is broken up into smaller pieces and distributed to many computers.…”
Section: MapReduce Framework
confidence: 99%
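The map/shuffle/reduce flow described in the excerpt above can be sketched in plain Python. This is an illustrative single-process simulation of the WordCount pattern, not Hadoop's actual Java API; the function names and the toy chunks are assumptions for the example:

```python
from collections import defaultdict

def map_phase(chunks):
    """Map: each input chunk independently emits (word, 1) pairs."""
    for chunk in chunks:
        for word in chunk.split():
            yield word.lower(), 1

def shuffle(pairs):
    """Shuffle: group all emitted values by key, as the framework would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

# The chunks stand in for file splits distributed across cluster nodes.
chunks = ["hadoop runs mapreduce", "mapreduce runs on hadoop"]
counts = reduce_phase(shuffle(map_phase(chunks)))
# counts["hadoop"] == 2 and counts["mapreduce"] == 2
```

In a real cluster, the map and reduce steps run on different machines and the shuffle moves data over the network; the fault tolerance mentioned above comes from re-running failed map or reduce tasks on other nodes.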
“…To cluster similar textures of images or regions, the mean μ_mn shown in (11) and the standard deviation σ_mn given in (12) can be used to represent the region's homogeneous texture feature:…”
Section: Texture Feature
confidence: 99%
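A minimal sketch of the (mean, standard deviation) texture descriptor mentioned in the excerpt above, assuming the region is a 2-D grid of pixel intensities; the helper name `texture_feature` and the toy region are hypothetical, not from the cited paper:

```python
import statistics

def texture_feature(region):
    """Hypothetical helper: returns the (mean, std dev) pair used as a
    homogeneous texture descriptor for one image region."""
    pixels = [p for row in region for p in row]  # flatten the 2-D region
    mu = statistics.fmean(pixels)       # mean intensity, as in (11)
    sigma = statistics.pstdev(pixels)   # population std deviation, as in (12)
    return mu, sigma

region = [[10, 12], [14, 16]]  # toy 2x2 block of intensities
mu, sigma = texture_feature(region)
```

Regions with similar (μ, σ) pairs can then be grouped by any standard clustering algorithm.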
“…The MapReduce framework can be used to distribute the feature selection task [14], [15]. The MapReduce framework [16] has been successfully tested on very large datasets from different domains [17], [18], [19]. The best-known implementations of MapReduce are Hadoop and MongoDB.…”
Section: The Computational Time Problem
confidence: 99%