ICC 2019 - 2019 IEEE International Conference on Communications (ICC) 2019
DOI: 10.1109/icc.2019.8761183

DQ Scheduler: Deep Reinforcement Learning Based Controller Synchronization in Distributed SDN

Abstract: In distributed software-defined networks (SDN), multiple physical SDN controllers, each managing a network domain, are implemented to balance centralized control, scalability and reliability requirements. In such a networking paradigm, controllers synchronize with each other to maintain a logically centralized network view. Despite various proposals of distributed SDN controller architectures, most existing works only assume that such a logically centralized network view can be achieved with some synchronization d…

Cited by 27 publications (32 citation statements)
References 22 publications
“…Similarly, online convex optimization is used for cloud and IoT resource orchestration [20], [21], but requires convex objective functions, a condition not satisfied here. Another approach is reinforcement learning (RL), used in spectrum management [14], network diagnostics [23], interference coordination [24], and SDN control [25], among others. However, RL suffers from the curse of dimensionality and lacks convergence guarantees.…”
Section: Related Work
confidence: 99%
“…MIND predicts spatial-temporal traffic information or network conditions, and its policy generation module learns optimal routing policies from data using RL. In distributed SDN, the problem of controller synchronization was formulated as a Markov Decision Process (MDP) in [28], with a limited synchronization budget, to determine policies that maximize the benefits of controller synchronization over time. The resulting RL-based algorithm, called the Deep-Q (DQ) Scheduler, uses a deep neural network (DNN) to represent its value function and provides a nearly twofold performance improvement over state-of-the-art SDN controller synchronization solutions.…”
Section: Related Work
confidence: 99%
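The MDP view of controller synchronization described above can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's implementation: tabular Q-learning stands in for the DQ Scheduler's DNN value function, the state is a hypothetical per-domain "staleness" counter (steps since each remote domain was last synchronized), the budget is one synchronization per step, and the reward simply penalizes accumulated staleness.

```python
import random

# Illustrative sketch (NOT the paper's DQ Scheduler): tabular Q-learning for
# choosing which of N_DOMAINS remote domains to synchronize with per step,
# under a budget of one synchronization per time step.
N_DOMAINS = 3
MAX_STALE = 4              # staleness counters are capped for a finite state space
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = {}                     # maps (state, action) -> estimated value

def step(state, action):
    """Sync the chosen domain (staleness -> 0); the others age by one step."""
    nxt = tuple(0 if i == action else min(s + 1, MAX_STALE)
                for i, s in enumerate(state))
    return nxt, -sum(nxt)  # reward penalizes total accumulated staleness

def choose(state, explore=True):
    """Epsilon-greedy action selection over the learned Q-table."""
    if explore and random.random() < EPS:
        return random.randrange(N_DOMAINS)
    return max(range(N_DOMAINS), key=lambda a: Q.get((state, a), 0.0))

def train(episodes=500, horizon=30, seed=0):
    """Standard one-step Q-learning updates over short episodes."""
    random.seed(seed)
    for _ in range(episodes):
        state = tuple([MAX_STALE] * N_DOMAINS)
        for _ in range(horizon):
            a = choose(state)
            nxt, r = step(state, a)
            best_next = max(Q.get((nxt, b), 0.0) for b in range(N_DOMAINS))
            q = Q.get((state, a), 0.0)
            Q[(state, a)] = q + ALPHA * (r + GAMMA * best_next - q)
            state = nxt
    return Q
```

Under this reward, the learned greedy policy tends toward synchronizing the stalest domain first (effectively a round-robin in the symmetric case); the paper's contribution is learning such policies with a DNN value function when the state space is far too large for a table.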
“…However, experiments in dynamic environments such as vehicular networks are missing, so its efficiency under such challenging conditions remains unproven. The authors in [12] address distributed SDN and tackle the inter-controller synchronization problem with a Deep Reinforcement Learning (DRL) approach. They propose a routing-focused DQ (Deep-Q) Scheduler that learns a policy optimizing the controller synchronization scheme over a time period.…”
Section: Distributed Control Plane Related Work
confidence: 99%