2015 IEEE TrustCom/BigDataSE/ISPA
DOI: 10.1109/trustcom.2015.577
A MapReduce-Based k-Nearest Neighbor Approach for Big Data Classification

Cited by 76 publications (54 citation statements)
References 10 publications
“…This approach iteratively performs MapReduce for every single test instance, with the consequent time consumption of Hadoop-based systems for iterations. In [28], however, we proposed a single Hadoop MapReduce process that can simultaneously classify large amounts of test samples against a big training dataset, avoiding the start-up costs of Hadoop. To do so, we read the test set line by line from the Hadoop File System, which makes this model fully scalable, although its performance can be further improved by in-memory solutions.…”
Section: (accepted manuscript)
mentioning
confidence: 99%
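The single-pass design quoted above can be illustrated with a minimal in-memory sketch: each mapper holds one split of the training set, streams every test instance, and emits its k local nearest neighbors; a reducer then merges the per-split candidate lists and majority-votes the label. This is a simplified simulation under assumed names and toy data, not the authors' implementation.

```python
import heapq
from collections import Counter, defaultdict

K = 3  # number of neighbors (illustrative choice)

def mapper(train_split, test_set):
    """Emit (test_id, [(dist, label), ...]): the K nearest neighbors
    of each test point within this training split only."""
    for tid, tx in enumerate(test_set):
        dists = [(sum((a - b) ** 2 for a, b in zip(tx, x)), y)
                 for x, y in train_split]
        yield tid, heapq.nsmallest(K, dists)

def reducer(tid, candidate_lists):
    """Merge per-split candidates into the global K-NN and vote."""
    merged = heapq.nsmallest(K, (c for cl in candidate_lists for c in cl))
    return tid, Counter(label for _, label in merged).most_common(1)[0][0]

# Toy data: two training splits, as if distributed over two nodes.
splits = [
    [((0.0, 0.0), "A"), ((0.1, 0.2), "A")],
    [((1.0, 1.0), "B"), ((0.9, 1.1), "B"), ((0.2, 0.1), "A")],
]
tests = [(0.0, 0.1), (1.0, 0.9)]

shuffled = defaultdict(list)
for split in splits:                      # map phase
    for tid, cands in mapper(split, tests):
        shuffled[tid].append(cands)

preds = dict(reducer(t, cls) for t, cls in shuffled.items())
print(preds)  # {0: 'A', 1: 'B'}
```

Because every mapper sees the whole test set in one job, classification of all test instances completes in a single MapReduce pass, which is the point the citing authors make.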
“…• We extend the MapReduce scheme proposed in [28] by using multiple reducers to speed up the processing when the number of maps needed is very high.…”
Section: (accepted manuscript)
mentioning
confidence: 99%
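The multiple-reducer extension mentioned above amounts to partitioning the shuffle key (here, the test-instance id) over R reducers so that the merge work proceeds in parallel rather than through a single reducer. A small sketch of that partitioning logic, with illustrative names and an assumed R of 4:

```python
R = 4  # assumed number of reducers

def partition(test_id, num_reducers=R):
    """Route a mapper's (test_id, candidates) pair to one reducer,
    as a hash partitioner would in Hadoop."""
    return hash(test_id) % num_reducers

# Simulate routing the ids emitted by the map phase.
buckets = {r: [] for r in range(R)}
for test_id in range(10):
    buckets[partition(test_id)].append(test_id)

# Every test instance lands in exactly one reducer's bucket.
assert sorted(i for b in buckets.values() for i in b) == list(range(10))
```

Since each test instance's candidate lists all carry the same key, they always meet in the same reducer, so the global k-NN merge stays correct while its cost is spread over R workers.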
“…In [19], the authors proposed an approach to k-NN using Hadoop. First, it splits the TR (training) set, which is distributed over the computing nodes.…”
Section: B. k-NN Design for Hadoop and Spark
mentioning
confidence: 99%