2022
DOI: 10.23919/jcn.2021.000041
Reinforcement learning based resource management for fog computing environment: Literature review, challenges, and open issues

Abstract: This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.

Cited by 58 publications (22 citation statements)
References 78 publications
“… , F_n}"
Output: Energy-minimized optimal resource allocation
Begin
  For each dataset "DS" with task "T" and fog nodes "F"
    // State space
    Load balancer acquires the input from the fog environment "Inp = {DS_i, WT_i, QL_i}"
    Mathematically formulate data size as in equation (16)
    Mathematically formulate waiting time as in equation (17)
    Mathematically formulate queue length as in equation (18)
    // Action space
    For each action "A" with the consolidated state "S"
      If task "T" generated by fog node "F_i" is executed locally
        Then "Y_i^loc = 1"
        Else "Y_i^loc = 0"
      End if
      If task "T" generated by fog node "F_i" is executed on the host node
        Then "Y_ij^F = 1"
        Else "Y_ij^F = 0"
      End if
      If task "T" generated by fog node "F_i" is executed by a neighbor
        Then "Y_ijk^F = 1"
        Else "Y_ijk^F = 0"
      End if
    // Reward function
    For each action "A" with the consolidated state "S" and task "T" generated by fog node "F_i"
      Total all the obtained rewards as in equation (19)
      Measure the stochastic Bellman gradient optimality function as in equation (20) …”
Section: Experimental Evaluation and Results
confidence: 99%
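The quoted algorithm maps naturally onto tabular Q-learning: a state built from data size, waiting time, and queue length; three placement actions (local, host, neighbor); and a Bellman backup over the summed rewards. The following is a minimal sketch of that structure in Python; the names, the discretization, the epsilon-greedy policy, and the reward shape are illustrative assumptions, not the cited paper's implementation.

import random
from collections import defaultdict

ACTIONS = ("local", "host", "neighbor")   # Y_i^loc, Y_ij^F, Y_ijk^F
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1     # learning rate, discount, exploration

Q = defaultdict(float)                    # Q[(state, action)] -> value estimate

def discretize(value, step=10.0, bins=5):
    # Bucket a continuous observation so the Q-table stays finite (assumed scheme).
    return min(int(value / step), bins - 1)

def make_state(data_size, wait_time, queue_len):
    # Consolidated state S from the three formulated quantities.
    return (discretize(data_size), discretize(wait_time), discretize(queue_len))

def choose_action(state):
    # Epsilon-greedy selection over the three placement actions.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # One Bellman backup (cf. the algorithm's stochastic Bellman optimality step).
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

Given an environment, the loop would observe (DS_i, WT_i, QL_i), call choose_action, execute the chosen placement, and feed a reward (for instance, the negated energy cost, which is an assumption here) back through update.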
“…To ensure service quality, another method called queuing theory-based cuckoo search was developed in [11]. The fog computing environment's challenges and open issues based on reinforcement learning were discussed in [12].…”
Section: Work That Are Related
confidence: 99%
“…Second, two-sided matching is examined in many scenarios, as load balancing and energy consumption criteria are considered in HNs to construct the preference lists. Third, reinforcement learning algorithms [45] are a promising way to investigate matching problems where one side may not know its players' preference relations a priori. Such players can only construct their preference lists by interacting with players on the other side [45].…”
Section: Discussion
confidence: 99%
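One common way to make "learning a preference list by interaction" concrete is a bandit-style estimator: each agent samples interactions with partners on the other side and ranks them by estimated value. The sketch below is an illustration under that reading; the function names and the Gaussian-noise example are assumptions, not the cited work's method.

import random

def learn_preference_list(partners, interact, rounds=500, eps=0.2):
    # interact(p) returns a noisy reward for being matched with partner p.
    counts = {p: 0 for p in partners}
    means = {p: 0.0 for p in partners}
    for _ in range(rounds):
        # Explore a random partner sometimes; otherwise exploit the best estimate.
        if random.random() < eps:
            p = random.choice(partners)
        else:
            p = max(partners, key=lambda q: means[q])
        r = interact(p)
        counts[p] += 1
        means[p] += (r - means[p]) / counts[p]   # incremental mean update
    # Preference list: partners ranked by estimated value, best first.
    return sorted(partners, key=lambda q: means[q], reverse=True)

# Hypothetical example: unknown true values drive noisy interaction rewards.
true_value = {"HN1": 0.9, "HN2": 0.4, "HN3": 0.7}
prefs = learn_preference_list(list(true_value),
                              lambda p: random.gauss(true_value[p], 0.1))
print(prefs)   # likely ["HN1", "HN3", "HN2"] once the estimates converge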
“…AI and ML tools provide efficient techniques to analyze and accurately predict the status of the system. Reinforcement learning is one such technique [82], [83], which can help build PLs efficiently through an online learning mechanism (i.e., exploitation and exploration). Thus, using these techniques in the context of computational offloading enables the system to make dynamic and efficient offloading decisions.…”
Section: F. Application of AI and ML-Based Techniques
confidence: 99%
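As one concrete reading of the exploitation/exploration remark, the sketch below uses a UCB1 bandit to pick an offloading target per task. The targets, the negated-delay reward, and all names are illustrative assumptions rather than the cited survey's algorithm.

import math

class OffloadBandit:
    # UCB1 over candidate execution targets; the reward could be, e.g., negated delay.
    def __init__(self, targets=("local", "fog", "cloud")):
        self.targets = list(targets)
        self.n = {t: 0 for t in self.targets}       # pulls per target
        self.mean = {t: 0.0 for t in self.targets}  # running mean reward

    def select(self):
        for t in self.targets:          # try every target once before comparing
            if self.n[t] == 0:
                return t
        total = sum(self.n.values())
        # Exploit high means while still exploring rarely-tried targets.
        return max(self.targets,
                   key=lambda t: self.mean[t] + math.sqrt(2 * math.log(total) / self.n[t]))

    def feedback(self, target, reward):
        self.n[target] += 1
        self.mean[target] += (reward - self.mean[target]) / self.n[target]

Per task, the system would call select(), run the task there, and report back with feedback(target, -measured_delay); over time the bandit concentrates on the target with the best observed trade-off while still occasionally re-checking the others.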