2019 IEEE 27th International Conference on Network Protocols (ICNP)
DOI: 10.1109/icnp.2019.8888034

MACS: Deep Reinforcement Learning based SDN Controller Synchronization Policy Design

Abstract: In distributed software-defined networks (SDN), multiple physical SDN controllers, each managing a network domain, are implemented to balance centralised control, scalability, and reliability requirements. In such networking paradigms, controllers synchronize with each other, in attempts to maintain a logically centralised network view. Despite the presence of various design proposals for distributed SDN controller architectures, most existing works only aim at eliminating anomalies arising from the inconsiste…

Cited by 19 publications (23 citation statements)
References 38 publications

“…They prove that it outperforms the anti-entropy algorithm. They extended this work by also proposing a DRL-based policy design for controller synchronization [34]. They demonstrated the efficiency of their DRL policy in maximizing the performance improvements, in terms of delay, that controller synchronizations bring over a period of time.…”
Section: Distributed Control Plane Related Work
confidence: 98%
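The statement above summarizes the idea behind [34]: a learned policy decides which controller synchronizations to perform so that the delay-related benefit of each synchronization is maximized over time. Purely as an illustration, the following is a minimal tabular Q-learning sketch of such a synchronization policy; the staleness-based environment, reward model, and all constants are invented stand-ins (the cited work trains a deep network on actual network state), not the authors' model.

```python
import random
from collections import defaultdict

# Toy model of the synchronization decision: at each step the agent picks
# one of N_DOMAINS remote domains to synchronize with. All dynamics and
# rewards below are invented for illustration only.
N_DOMAINS = 4
MAX_STALENESS = 5

def step(staleness, action):
    # Invented reward: synchronizing with the stalest view yields the
    # largest (delay-estimation) improvement.
    reward = staleness[action]
    staleness = [min(s + 1, MAX_STALENESS) for s in staleness]  # views age
    staleness[action] = 0  # the synchronized domain's view becomes fresh
    return tuple(staleness), reward

# Tabular Q-learning stand-in for the deep policy in [34].
Q = defaultdict(lambda: [0.0] * N_DOMAINS)
alpha, gamma, eps = 0.1, 0.9, 0.1

state = tuple(random.randint(0, MAX_STALENESS) for _ in range(N_DOMAINS))
for _ in range(10_000):
    if random.random() < eps:  # epsilon-greedy exploration
        action = random.randrange(N_DOMAINS)
    else:
        action = max(range(N_DOMAINS), key=lambda a: Q[state][a])
    next_state, reward = step(list(state), action)
    best_next = max(Q[next_state])
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    state = next_state
```

With this invented reward, the learned policy converges to preferentially synchronizing with the stalest domain, which mirrors the intuition that synchronizations should be spent where the network view has degraded most.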
“…An RL-based algorithm, the Deep-Q (DQ) Scheduler, uses a deep neural network (DNN) to represent its value function and provides a nearly twofold performance improvement over state-of-the-art SDN controller synchronization solutions. Some authors [29] use RL for autonomous cyber defense in SDN, while RL has also been used to resolve the synchronization issues of multiple controllers [30], [31]. Several AI techniques used in the SDN context, covering different security and placement issues, have been surveyed in [32].…”
Section: Related Work
confidence: 99%
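The DQ Scheduler quoted above replaces a tabular value function with a DNN. The sketch below shows what a DNN-represented value function with a one-step Bellman update looks like in PyTorch; the layer sizes, state encoding, and sample transition are invented placeholders, not the DQ Scheduler's actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: state is a per-domain staleness vector and each
# action synchronizes with one domain. Sizes are illustrative only.
N_DOMAINS = 4
STATE_DIM = N_DOMAINS

q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_DOMAINS),  # Q(s, a) for each synchronization action
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.9

def td_update(state, action, reward, next_state):
    """One Bellman-backup gradient step on the DNN value function."""
    q_sa = q_net(state)[action]
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example update with random tensors standing in for real transitions.
s, s2 = torch.rand(STATE_DIM), torch.rand(STATE_DIM)
td_update(s, action=2, reward=1.0, next_state=s2)
```

The point of the DNN is generalization: unlike the tabular sketch earlier, the network can estimate values for staleness vectors it has never seen, which is what makes the approach viable at realistic network scales.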
“…The DAIS ITA team has addressed these challenges. For example, the fundamentals of controller synchronization have been studied in [1], while techniques for controller synchronization have been developed in [2], [3] and [4]. A hybrid control architecture for SDN and ad-hoc networks, combining the advantages of central and distributed control mechanisms, is proposed in [5], and efficient techniques for sharing coalition resources across domains in SDC are investigated in [6] and [7].…”
Section: Software Defined Coalitions (SDC)
confidence: 99%
“…It is important to ensure that the dynamic configuration and re-configuration of resources and services can be carried out efficiently in the presence of possible domain fragmentation and re-joining. Since RL techniques (e.g., [3,4]) are commonly used to control infrastructures, it is desirable to understand and improve the operation of such learning techniques when the SDC can change suddenly from connected to fragmented, and vice versa. In particular, since SDC fragmentation and the re-joining of domains represent sudden changes in the operating environment, the learning-based control algorithms in use are expected to adapt quickly to such rapid changes while maintaining satisfactory performance and robustness.…”
Section: SDC Fragmentation
confidence: 99%
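Neither [3] nor [4] is reproduced here, but one simple way an RL controller can react to the sudden environment changes described above is to re-inflate its exploration rate when its recent rewards collapse, rather than keep trusting a policy learned for the old topology. The heuristic below is a purely illustrative sketch; the class name, thresholds, and decay schedule are all invented.

```python
from collections import deque

class AdaptiveExplorer:
    """Illustrative epsilon controller for RL under sudden SDC changes.

    Assumes rewards are positive; a sustained drop below a fraction of the
    running baseline is treated as an environment change (e.g., a domain
    fragmenting), triggering renewed exploration.
    """

    def __init__(self, eps_min=0.05, eps_max=0.5, window=50, drop=0.5):
        self.eps = eps_min
        self.eps_min, self.eps_max = eps_min, eps_max
        self.rewards = deque(maxlen=window)
        self.baseline = None
        self.drop = drop  # fraction of baseline treated as a collapse

    def observe(self, reward):
        self.rewards.append(reward)
        if len(self.rewards) == self.rewards.maxlen:
            avg = sum(self.rewards) / len(self.rewards)
            if self.baseline is None:
                self.baseline = avg
            elif avg < self.drop * self.baseline:
                # Likely fragmentation/re-joining: restart exploration
                # instead of exploiting a now-stale policy.
                self.eps = self.eps_max
                self.baseline = avg
        self.eps = max(self.eps_min, self.eps * 0.99)  # decay back down
        return self.eps
```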