2018
DOI: 10.14419/ijet.v7i3.31.18202
A Study on Big Data Hadoop Map Reduce Job Scheduling

Abstract: A new tera-to-zetta era has been created by the huge volumes of data sets continuously collected from social networks, machine-to-machine devices, Google, Yahoo, sensors, etc., collectively called big data. Day by day, data storage size, data processing power, data availability, and the digital world's data size in zettabytes double. Apache Hadoop is the latest market weapon for handling huge volumes of data sets through its most popular components, HDFS and MapReduce, to achieve efficient storage ability and e…
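To make the MapReduce model named in the abstract concrete, here is a minimal, framework-free sketch of the classic word-count pattern in plain Python. The function names are illustrative, not Hadoop APIs; a real job would be written against the Hadoop Java or Streaming interfaces.

```python
from collections import defaultdict

# Map phase: emit a (word, 1) pair for every word in the input lines.
def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

# Shuffle: group emitted values by key, as the framework does between phases.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: sum the counts for each word.
def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

counts = reduce_phase(shuffle(map_phase(["big data", "Big Data Hadoop"])))
# counts == {"big": 2, "data": 2, "hadoop": 1}
```

In a real Hadoop deployment the shuffle step is performed by the framework across the cluster; it is written out explicitly here only to show where the grouping happens.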

Cited by 2 publications (2 citation statements)
References 12 publications
“…This scheduling enhances MapReduce with data locality to improve its performance and achieve the quickest response time for Map tasks [43]. Resources are reallocated by evicting the pending tasks of preceding jobs to free a slot for the new task, and by waiting for a task to finish in its assigned slot before launching the latest task [3]. If the data for a task is not present locally, the task tracker will wait for a predetermined amount of time.…”
Section: Delay Scheduler
confidence: 99%
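The delay-scheduling policy quoted above can be sketched as follows: when a node has a free slot, a data-local task is launched immediately, while a non-local task is skipped until it has waited longer than a fixed threshold. All names here are illustrative assumptions, not actual Hadoop scheduler classes.

```python
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    input_nodes: set          # nodes holding this task's input block
    wait_time: float = 0.0    # how many scheduling rounds the task has been skipped

def pick_task(pending, free_node, max_delay):
    """Choose the next task for free_node under a delay-scheduling policy."""
    # Prefer a task whose input data already resides on the free node.
    for task in pending:
        if free_node in task.input_nodes:
            pending.remove(task)          # data-local: launch immediately
            return task
    # No local task: launch the head task only once it has waited long enough.
    if pending and pending[0].wait_time >= max_delay:
        return pending.pop(0)
    # Otherwise skip this round and let every pending task accumulate delay.
    for task in pending:
        task.wait_time += 1
    return None
```

A usage example: with `pending = [Task("t1", {"nodeB"}), Task("t2", {"nodeA"})]`, a free slot on `nodeA` launches `t2` first because its data is local, while `t1` keeps waiting until its delay budget is exhausted, matching the "wait a predetermined amount of time" behaviour described in the citation.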
“…These data sources include the IT sector, media streams, transactional information from enterprise applications, electronic gadgets, files, and many more [2]. The characteristics of big data are known as the "V" challenges; these were later expanded to a larger number of V's: Velocity, Variety, Volume, Veracity, Variability, Value, Verification, and Vulnerability [3,4]. Managing such highly characterized data is arduous for traditional database systems.…”
Section: Introduction
confidence: 99%