2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS)
DOI: 10.1109/icdcs.2018.00159
Q-Placement: Reinforcement-Learning-Based Service Placement in Software-Defined Networks

Cited by 29 publications (17 citation statements; citing years 2019–2024)
References 15 publications
“…3) Reinforcement Learning in SDN: Some recent high-profile successes [25], [45] have attracted enormous interest in using RL techniques to solve complicated decision-making problems. In the context of SDN, the authors in [46] apply RL-based algorithms to solve the service placement problem on SDN switches. A routing-focused controller synchronization scheme is developed using DRL-based approaches in [47], where the MDP the authors formulate is easier to solve, as they assume a uniform synchronization budget and coarse-grained synchronization decisions made at the level of controller pairs.…”
Section: Evaluation Results
Mentioning confidence: 99%
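For readers unfamiliar with the approach this excerpt describes, the sketch below illustrates how tabular Q-learning can drive a placement decision of the kind studied in [46]. It is a minimal illustration under stated assumptions, not the paper's algorithm: the candidate switches, state encoding, and hyperparameters are all hypothetical.

    import random
    from collections import defaultdict

    # Minimal tabular Q-learning sketch for service placement (illustrative only).
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # assumed hyperparameters
    SWITCHES = ["s1", "s2", "s3"]           # hypothetical candidate hosting switches
    Q = defaultdict(float)                  # Q[(state, switch)] -> estimated value

    def choose_placement(state):
        # Epsilon-greedy choice of the switch that should host the service.
        if random.random() < EPSILON:
            return random.choice(SWITCHES)
        return max(SWITCHES, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        # Standard Q-learning update: Q += alpha * (r + gamma * max_a' Q' - Q).
        best_next = max(Q[(next_state, a)] for a in SWITCHES)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

A full agent would loop choose_placement and update over observed network states; here a state can be any hashable summary, for example a tuple of per-switch loads.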
“…Since a controller has a global view of the network, it can observe the reward and the next state for the current state and action. This advantage allows the proposed Q-learning scheme to provide guarantees on both performance and convergence rate [16]. As a result, the SDN Fog controller can find optimal actions that help nodes optimally select a neighboring node to which they can offload their requested tasks.…”
Section: A. Contributions
Mentioning confidence: 99%
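For context, the update such a controller can apply once it observes the reward and next state is the standard Q-learning rule. This is a textbook result, independent of [16]:

    Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha_t \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]

which converges to the optimal action values provided every state-action pair is visited infinitely often and the step sizes satisfy \sum_t \alpha_t = \infty and \sum_t \alpha_t^2 < \infty. The controller's global view matters because it supplies r_{t+1} and s_{t+1} exactly, rather than through partial or delayed observations.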
“…Towards this end, the proposed model is integrated with SDN-based Fog computing, where the SDN Fog controller directly controls, programs, orchestrates, and manages network resources. Further, SDN Fog nodes serve end users' requests and report the traffic information they collect to the controller [16]. Moreover, the proposed reward function is defined with the aim of minimizing the processing time and the overall overloading probability.…”
Section: A. Contributions
Mentioning confidence: 99%
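As a concrete illustration of a reward shaped this way, the snippet below combines a processing-time term with an overload term. The weights and the utilization-based proxy for overloading probability are assumptions for illustration, not the cited paper's definition.

    def reward(processing_time_s, node_load, capacity, w_time=1.0, w_over=1.0):
        # Crude proxy for overloading probability: utilization clipped to [0, 1].
        overload_prob = min(1.0, node_load / capacity)
        # Negative cost: maximizing this reward minimizes time plus overload risk.
        return -(w_time * processing_time_s + w_over * overload_prob)

    # Example: reward(0.2, node_load=8, capacity=10) == -(0.2 + 0.8) == -1.0

Expressing both objectives as one negated weighted cost keeps the reward scalar, which is what a Q-learning agent requires.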