2021
DOI: 10.1007/s10922-021-09603-x
Deep Reinforcement Learning Based Active Queue Management for IoT Networks

Abstract: ©Minsu Kim, 2019, Master of Applied Science in Computer Networks, Ryerson University. The Internet of Things (IoT) has pervaded most aspects of our lives through the Fourth Industrial Revolution. It is expected that a typical family home could contain several hundred smart devices by 2022. Current network architectures have been moving toward fog/edge architectures to provide capacity for IoT. However, in order to deal with the enormous amount of tra…

Cited by 29 publications (10 citation statements)
References 31 publications
“…The problem of modeling, managing, and optimizing resources in a heterogeneous communication network is a very challenging engineering problem because of its inherent complexity ([1], [2], [31]–[33]). Indeed, one of the most difficult challenges in applying optimization techniques to the management of such complex networks is deriving predictive models of the queue behaviour of the switches ([34]–[39]). Numerous studies have also been conducted to maximize the performance of the controller and OpenFlow switches of SDNs.…”
Section: Related Work
confidence: 99%
“…Based on the idea of the ARED algorithm, the target queue length, target, is set within the range [min_th + 0.4(max_th − min_th), min_th + 0.6(max_th − min_th)], and the relationship between target and Q_avg is used to adaptively adjust max_p [21]. If Q_avg is in the vicinity of min_th and Q_avg < target, the congestion adjustment is too active, and hence the value of max_p must be decreased; if Q_avg is in the vicinity of max_th and Q_avg > target, the congestion adjustment is too conservative, and the value of max_p must be increased [22]. Let max_p⁺ and max_p⁻ represent the maximum drop probabilities obtained after a radical increase and a conservative decrease, respectively, which are expressed as follows:…”
Section: Packet Drop Probability Based on the Cubic Function
confidence: 99%
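The ARED-style adaptation described in the excerpt above can be sketched as follows. The names `q_avg`, `max_p`, `min_th`, and `max_th` follow the excerpt; the step sizes `alpha` and `beta` are illustrative assumptions, not values taken from the cited paper.

```python
def adjust_max_p(q_avg, max_p, min_th, max_th, alpha=0.01, beta=0.9):
    """Adaptively tune the maximum drop probability max_p (ARED-style sketch).

    The target queue length is kept inside the band
    [min_th + 0.4*(max_th - min_th), min_th + 0.6*(max_th - min_th)].
    alpha (additive increase) and beta (multiplicative decrease) are
    hypothetical tuning constants chosen only for illustration.
    """
    target_lo = min_th + 0.4 * (max_th - min_th)
    target_hi = min_th + 0.6 * (max_th - min_th)

    if q_avg < target_lo:
        # Queue is short of the target band: congestion control is too
        # active, so back off the drop probability.
        max_p *= beta
    elif q_avg > target_hi:
        # Queue overshoots the target band: congestion control is too
        # conservative, so raise the drop probability.
        max_p += alpha

    # Keep max_p a valid probability.
    return min(max(max_p, 0.0), 1.0)

# Example: with min_th=20, max_th=80 the target band is [44, 56];
# an average queue of 40 falls below it, so max_p is decreased.
new_p = adjust_max_p(q_avg=40, max_p=0.1, min_th=20, max_th=80)
```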
“…The goal of MPC is to minimize the cost function (23). In order to find the optimal control input, equation (23) can be written as:…”
Section: B. Controller Design
confidence: 99%
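The excerpt references cost function (23) without reproducing it. Purely for orientation, a standard finite-horizon quadratic MPC cost, which is an illustrative form and not the cited paper's actual equation, looks like:

```latex
J(u) = \sum_{k=0}^{N-1} \left( x_k^{\top} Q x_k + u_k^{\top} R u_k \right) + x_N^{\top} P x_N
```

Here $x_k$ is the state (e.g., queue length deviation), $u_k$ the control input, and $Q$, $R$, $P$ positive (semi-)definite weight matrices; the controller minimizes $J$ over the input sequence subject to the system dynamics.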
“…[22] presented a solution that tackles the challenge of tuning active queue management parameters in inter-domain congestion control scenarios by using an artificial neural network. [23] adopted a deep reinforcement learning technique to achieve a trade-off between queuing delay and throughput. However, most of the methods above rely on a network system model and require a large amount of computation and storage space.…”
confidence: 99%