2016
DOI: 10.14445/23488387/ijcse-v3i10p111

AGERL Based Enhanced Map Reduce Technique in Cloud Scheduling

Abstract: Today's real-time big data applications mostly rely on the map-reduce (M-R) framework of the Hadoop File System (HDFS). Hadoop reduces the complexity of building such applications. This paper pursues two goals: maximizing resource utilization and reducing the overall job completion time. Based on these goals, we have developed the Agent Centric Enhanced Reinforcement Learning Algorithm (AGERL). The algorithm concentrates on four dimensions: variable partitioning of tasks, calculation of the progress ratio of …
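The truncated abstract does not spell out the algorithm itself. Purely as an illustrative sketch of agent-centric reinforcement-learning scheduling (not the paper's actual AGERL), a tabular Q-learning agent that assigns incoming tasks to nodes might look like this; the state encoding, reward shaping, and simulation loop are all assumptions:

```python
import random

class QScheduler:
    """Toy Q-learning task scheduler: the agent picks a node for each task,
    rewarded for keeping per-node load (and hence completion time) low.
    All details here are illustrative assumptions, not the paper's AGERL."""

    def __init__(self, n_nodes, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.n_nodes = n_nodes
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {}  # (state, node) -> estimated value

    def state(self, loads):
        # Discretize each node's load into coarse buckets (0-2).
        return tuple(min(load // 5, 2) for load in loads)

    def choose(self, loads):
        s = self.state(loads)
        if random.random() < self.epsilon:  # explore
            return random.randrange(self.n_nodes)
        return max(range(self.n_nodes), key=lambda a: self.q.get((s, a), 0.0))

    def update(self, loads, action, reward, next_loads):
        s, s2 = self.state(loads), self.state(next_loads)
        best_next = max(self.q.get((s2, a), 0.0) for a in range(self.n_nodes))
        old = self.q.get((s, action), 0.0)
        self.q[(s, action)] = old + self.alpha * (reward + self.gamma * best_next - old)

def simulate(steps=2000, n_nodes=3, seed=0):
    random.seed(seed)
    sched = QScheduler(n_nodes)
    loads = [0] * n_nodes
    for _ in range(steps):
        task_cost = random.randint(1, 4)
        node = sched.choose(loads)
        before = loads[:]
        loads[node] += task_cost
        # Reward: the less loaded the chosen node, the better.
        reward = -float(loads[node])
        sched.update(before, node, reward, loads)
        # Each node completes some work every step.
        loads = [max(0, l - 2) for l in loads]
    return sched, loads
```

Running `simulate()` trains the agent on a stream of synthetic tasks; in practice the reward would come from measured progress ratios rather than a fixed drain rate.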

Cited by 2 publications (2 citation statements); references 8 publications.
“…From the log, the holes should be mapped to the sub-ensembles; at the same time the deadline constraints are calculated. If they are satisfied, the ensemble is sent to the queue for further processing and updated continuously; otherwise the ensemble is rejected. After estimating the budget and rejection [8] of the ensembles, the distance between any two nodes of the workflows present in the queue is calculated with the help of the Manhattan distance (U_m), which is computed as follows [14] …”
Section: Proposed Workflow Execution Model
confidence: 99%
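The Manhattan distance referenced above is the standard L1 metric, the sum of absolute coordinate differences. A minimal helper (representing each node as a coordinate tuple is an assumption of this sketch):

```python
def manhattan_distance(p, q):
    # Manhattan (L1) distance between two nodes given as coordinate tuples.
    if len(p) != len(q):
        raise ValueError("nodes must have the same dimensionality")
    return sum(abs(a - b) for a, b in zip(p, q))
```

For example, `manhattan_distance((1, 2), (4, 6))` yields 7, since |1-4| + |2-6| = 3 + 4.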
“…After that, the appropriate resources are provided to the genomic data by forming clusters. During the cluster estimation process [14], neighboring genomic information is gathered to reduce the processing of dissimilar information, which indirectly improves the processing time. The similar gene information is clustered and then allocated to the virtual machines by applying the HFEL algorithm [15].…”
Section: Case Study and Analysis and Research
confidence: 99%
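The HFEL algorithm itself is not given in this excerpt. As a hedged stand-in for the cluster-then-allocate step it describes, similar records can be grouped by a shared similarity key and the resulting clusters distributed across virtual machines round-robin (the gene-family key and round-robin policy are assumptions of this sketch, not the citing paper's method):

```python
from collections import defaultdict

def cluster_and_allocate(genes, n_vms):
    """Group records sharing a similarity key (here: a gene-family label),
    then assign each cluster to a virtual machine round-robin.
    A simple stand-in for an HFEL-style cluster-to-VM allocation."""
    clusters = defaultdict(list)
    for family, record in genes:
        clusters[family].append(record)
    allocation = {vm: [] for vm in range(n_vms)}
    # Deterministic order so repeated runs allocate identically.
    for i, (family, records) in enumerate(sorted(clusters.items())):
        allocation[i % n_vms].append((family, records))
    return allocation
```

For instance, three records from families "A" and "B" spread over two VMs: family "A" lands on VM 0 and family "B" on VM 1, keeping similar gene information co-located on one machine.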