2021
DOI: 10.1109/jsac.2021.3078493
Cache-Enabled Multicast Content Pushing With Structured Deep Learning

Cited by 17 publications (5 citation statements)
References 32 publications
“…Strategies based on reinforcement learning and deep learning use observable user data or environmental states, such as user contextual information, channel gain, or cache state, for online caching decisions and resource allocation. Wireless channels can carry only a finite amount of data per unit time, so proactive caching strategies have been investigated in order to maximise bandwidth utilisation [21]-[24]. However, when the user request or environment state space is large, centralised reinforcement learning caching strategies become complex and difficult to handle, hence distributed reinforcement learning approaches have been proposed [25].…”
Section: B. Joint Caching and Resource Optimization
confidence: 99%
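The RL-based online caching decisions described in this statement can be sketched, in heavily simplified tabular form, as a Q-learning cache-admission rule. Everything below (the state and action definitions, reward shape, and popularity model) is an illustrative assumption, not the scheme of any cited work, which use deep or distributed RL over far richer states.

```python
import random

# Minimal tabular Q-learning sketch of an online cache-admission decision.
N_FILES = 4          # catalogue size
ACTIONS = (0, 1)     # 0: skip the requested file, 1: cache it
ALPHA, GAMMA, EPS = 0.1, 0.5, 0.1

# State: the id of the currently requested file.
Q = {(s, a): 0.0 for s in range(N_FILES) for a in ACTIONS}

def choose_action(state, rng):
    """Epsilon-greedy action selection over the Q-table."""
    if rng.random() < EPS:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def reward(state, action, popular):
    """+1 for caching a popular file or skipping an unpopular one."""
    return 1.0 if (action == 1) == (state in popular) else -1.0

def train(episodes=3000, seed=0):
    rng = random.Random(seed)
    popular = {0, 1}               # assumed popular content ids
    weights = [4, 3, 1, 1]         # request skew toward popular files
    s = rng.choices(range(N_FILES), weights=weights)[0]
    for _ in range(episodes):
        a = choose_action(s, rng)
        r = reward(s, a, popular)
        s2 = rng.choices(range(N_FILES), weights=weights)[0]
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

if __name__ == "__main__":
    train()
    # The learned policy should cache popular files and skip unpopular ones.
    print(Q[(0, 1)] > Q[(0, 0)], Q[(3, 0)] > Q[(3, 1)])
```

When the state space grows (per-user requests, channel gains, cache occupancy), this table becomes intractable, which is precisely the motivation the statement gives for deep and distributed RL variants.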
“…Similar to the contributions in [14] and [15], we predict user preference popularity by analyzing historical request information. Similar to the contributions in [23], [24], [25], [26], [27], and [28], we utilize reinforcement learning for dynamic cache decision optimization. However, what sets our work apart from these contributions is that we employ predicted user preferences for D2D shared node selection.…”
Section: Our Contribution
confidence: 99%
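The idea of predicting user preference popularity from historical request information, mentioned in this statement, reduces in its simplest form to an empirical frequency estimate. The sketch below is an assumed toy simplification; the cited works [14], [15] use learning models rather than raw counting.

```python
from collections import Counter

def predict_preferences(history, top_k=2):
    """Rank content ids by empirical request frequency in the user's history."""
    counts = Counter(history)
    return [cid for cid, _ in counts.most_common(top_k)]

if __name__ == "__main__":
    # A user who requested 'a' three times and 'b' twice prefers those items.
    history = ["a", "b", "a", "c", "a", "b"]
    print(predict_preferences(history))  # ['a', 'b']
```

Such a ranking could then feed the D2D shared-node selection step the statement describes, with the most-preferred content candidates cached at sharing nodes.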
“…Finally, the joint scheduling of memories and wireless links generalizes the concept of cross-layer design by involving both the communication and memory units. Deep learning and deep reinforcement learning are expected to play key roles in dealing with the dynamic nature of user requests and radio environments [33][34][35].…”
Section: Memory Costs
confidence: 99%
“…Joint proactive pushing and caching designs, which can improve system performance by proactively transferring and replacing cached content during low-traffic periods to meet future user needs, have been extensively studied [10]-[13]. Somuyiwa et al. [10] propose a threshold-based proactive caching strategy that requires causal knowledge of channel quality, content profile, and user access behavior.…”
Section: Introduction
confidence: 99%
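A threshold-based proactive caching rule of the kind attributed to Somuyiwa et al. [10] can be sketched as: push content ahead of demand only when the observed channel quality exceeds a threshold, so prefetching happens in cheap transmission slots. The threshold value and channel-gain figures below are assumptions for illustration, not parameters from the paper.

```python
def proactive_push(channel_gain, threshold=0.7):
    """Prefetch content only when the channel is good enough that
    each pushed bit costs little transmission resource."""
    return channel_gain >= threshold

def push_fraction(gains, threshold=0.7):
    """Fraction of time slots in which content would be pushed."""
    decisions = [proactive_push(g, threshold) for g in gains]
    return sum(decisions) / len(decisions)

if __name__ == "__main__":
    # Five slots with varying channel gains; three exceed the threshold.
    gains = [0.2, 0.9, 0.75, 0.4, 0.95]
    print(push_fraction(gains))  # 0.6
```

The statement's point is that such a rule needs causal knowledge of channel quality and user behavior, i.e. the threshold test must be evaluable from information available at decision time.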
“…However, these proactive caching schemes do not consider the limitations of the wireless transmission channel between the server and the user. On the other hand, Chen et al. [13] develop a multicast proactive content pushing strategy based on structured deep learning, which considers the complex coupling between time-varying transmission capacity and the proactive caching decisions across users, but it focuses only on proactive caching at the user end, ignoring its impact on the server side.…”
Section: Introduction
confidence: 99%