2019
DOI: 10.1109/tccn.2019.2936193

Deep Reinforcement Learning for Adaptive Caching in Hierarchical Content Delivery Networks

Abstract: Caching is envisioned to play a critical role in next-generation content delivery infrastructure, cellular networks, and Internet architectures. By smartly storing the most popular contents at storage-enabled network entities during off-peak demand instances, caching can benefit both the network infrastructure and end users during on-peak periods. In this context, distributing the limited storage capacity across network entities calls for decentralized caching schemes. Many practical caching systems inv…

Cited by 116 publications (69 citation statements) · References 49 publications
“…However, [54] is mostly confined to centralized learning within one base station as a single agent, which is not scalable in mobile edge-cloud scenarios [56]. The authors in [55] propose deep reinforcement learning-based caching in hierarchical content delivery networks. The proposed framework, DQNCache, relies on Deep Q-Networks to learn an optimal caching policy in an online manner.…”
Section: Related Work
confidence: 99%
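The citing text above describes an edge cache that learns its caching policy online with deep Q-networks. As a rough, hypothetical sketch of that online setting (not the authors' DQNCache implementation; the catalog size, state encoding, Zipf request model, and the linear Q-function stand-in are all assumptions), a single edge cache can be driven step by step as follows:

```python
# Minimal sketch (not the cited DQNCache implementation): an edge cache that
# learns an admission policy online with an epsilon-greedy Q-function.
# Catalog size, state encoding, and the Zipf request model are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_CONTENTS, CACHE_SIZE, EPSILON = 20, 5, 0.1

# Linear Q-function stand-in for the DQN: Q(s, a) = s @ W[:, a]
W = rng.normal(scale=0.01, size=(2 * N_CONTENTS, N_CONTENTS))

freq = np.zeros(N_CONTENTS)            # running request counts (popularity proxy)
cache = set(rng.choice(N_CONTENTS, CACHE_SIZE, replace=False))
zipf_p = (1.0 / np.arange(1, N_CONTENTS + 1)) ** 1.2
zipf_p /= zipf_p.sum()

for t in range(1000):
    req = rng.choice(N_CONTENTS, p=zipf_p)     # incoming content request
    freq[req] += 1
    state = np.concatenate([freq / (t + 1),
                            np.isin(np.arange(N_CONTENTS), list(cache))])
    reward = 1.0 if req in cache else 0.0       # cache hit yields reward

    # On a miss, pick which content to admit with an epsilon-greedy rule.
    if reward == 0.0:
        q_values = state @ W
        admit = rng.integers(N_CONTENTS) if rng.random() < EPSILON else int(q_values.argmax())
        if admit not in cache:
            evict = min(cache, key=lambda c: freq[c])   # evict least-requested item
            cache.discard(evict)
            cache.add(admit)
    # A full DQN agent would store (state, action, reward, next_state) in a
    # replay buffer here and periodically fit the Q-network on mini-batches.
```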
“…We model the mobile edge-cloud network environment with a novel state space and a set of actions that edges can take to collaborate with other nodes, interacting with the environment in real time to maximise a cumulative reward. Traditional single-agent DRL-based caching approaches [54,55] have been proposed in which a single edge (e.g., a single base station or access point) learns to make the most suitable caching decisions based on the states of the environment and the rewards.…”
Section: Introduction
confidence: 99%
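To make the state/action/reward formulation mentioned above concrete, here is a minimal single-edge environment sketch. The state encoding, the fixed neighbor cache, and the reward values for serving a request locally, from a collaborating edge, or from the cloud are illustrative assumptions, not the cited papers' exact design.

```python
# Illustrative single-edge cache environment in an edge-cloud hierarchy.
# Reward depends on where the next request is served from (assumed values).
import numpy as np

class EdgeCacheEnv:
    """State: per-content request rates + local cache occupancy.
    Action: index of a content to place locally (replacing the coldest item).
    Reward: local hit > neighbor-edge fetch > cloud fetch."""

    def __init__(self, n_contents=20, cache_size=5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n, self.k = n_contents, cache_size
        p = (1.0 / np.arange(1, n_contents + 1)) ** 1.2
        self.popularity = p / p.sum()
        # Contents assumed to be held by a collaborating neighbor edge.
        self.neighbor = set(self.rng.choice(n_contents, cache_size, replace=False))

    def reset(self):
        self.counts = np.zeros(self.n)
        self.cache = set(self.rng.choice(self.n, self.k, replace=False))
        return self._state()

    def _state(self):
        occupancy = np.isin(np.arange(self.n), list(self.cache)).astype(float)
        return np.concatenate([self.counts / max(1.0, self.counts.sum()), occupancy])

    def step(self, action):
        # Apply the caching action: admit `action`, evict the least-requested item.
        if action not in self.cache:
            self.cache.discard(min(self.cache, key=lambda c: self.counts[c]))
            self.cache.add(int(action))
        req = self.rng.choice(self.n, p=self.popularity)
        self.counts[req] += 1
        if req in self.cache:          # served locally
            reward = 1.0
        elif req in self.neighbor:     # fetched from a collaborating edge
            reward = 0.5
        else:                          # fetched from the cloud
            reward = -0.2
        return self._state(), reward

env = EdgeCacheEnv()
state = env.reset()
state, reward = env.step(3)            # cache content 3, observe the next request
```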
“…This approach increases the long-term reward of the system and hence improves performance. The deep reinforcement learning approach intelligently perceives the environment and automatically learns the caching policy from the request history [68,69]. The emergence of deep neural networks has made it feasible to learn automatically from raw and possibly high-dimensional data.…”
Section: Recommendation Via Q-Learning
confidence: 99%
“…Deep neural networks (DNNs) can address the curse of dimensionality in high-dimensional and continuous state spaces by providing compact low-dimensional representations of high-dimensional inputs [19]. By wedding deep learning with RL (using a DNN to approximate the action-value function), deep (D)RL has endowed artificial agents with human-level performance across diverse application domains [18], [20]. (D)RL algorithms have also shown great potential in several challenging power-system control and monitoring tasks [21], [22], [6], [23], [24], [25], including load control [26], [27].…”
Section: Introduction
confidence: 99%
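As a small illustration of a DNN approximating the action-value function, the following PyTorch module maps a state vector to one Q-value per caching action; the layer widths and the state/action dimensions are assumptions chosen only for exposition.

```python
# Minimal sketch of a DNN action-value approximator Q(s, .; theta) in PyTorch.
# Layer widths and state/action dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        # Maps a high-dimensional state to one Q-value per action.
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

q_net = QNetwork(state_dim=40, n_actions=20)
greedy_action = q_net(torch.rand(1, 40)).argmax(dim=1)   # action with the largest Q-value
```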
“…Form the mini-batch loss L_Tar(θ_τ; M_τ) using (19). 12: Update θ_{τ+1} using (20). 13: if mod(τ, B) = 0 then…”
confidence: 99%
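The algorithm excerpt above forms a mini-batch loss over parameters θ_τ and a memory M_τ, updates θ, and branches on mod(τ, B) = 0, which matches the usual pattern of a periodic target-network refresh. Since equations (19) and (20) are not reproduced here, the sketch below uses the standard DQN-style temporal-difference loss and an Adam update as stand-ins; the network, batch contents, and hyperparameters are assumptions.

```python
# Sketch of the training step suggested by the excerpt: a mini-batch TD loss
# with a target network, a gradient update of theta, and a target refresh
# every B iterations. Equations (19)-(20) are not reproduced; this is the
# standard DQN-style form with assumed hyperparameters.
import copy
import torch
import torch.nn as nn

state_dim, n_actions, gamma, B = 40, 20, 0.99, 50
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = copy.deepcopy(q_net)             # frozen copy used for the TD target
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def train_step(tau, batch):
    states, actions, rewards, next_states = batch
    with torch.no_grad():
        # TD target formed with the target network's parameters.
        target = rewards + gamma * target_net(next_states).max(dim=1).values
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q_sa, target)   # mini-batch loss over M_tau

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                              # gradient update of theta

    if tau % B == 0:                              # periodic target-network refresh
        target_net.load_state_dict(q_net.state_dict())
    return loss.item()

# Example call with a random mini-batch of 32 transitions.
batch = (torch.rand(32, state_dim), torch.randint(n_actions, (32,)),
         torch.rand(32), torch.rand(32, state_dim))
print(train_step(tau=50, batch=batch))
```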