2021
DOI: 10.1109/tnsm.2021.3077249

Generative Adversarial Network-Based Transfer Reinforcement Learning for Routing With Prior Knowledge

Cited by 20 publications (18 citation statements)
References 51 publications
“…Combining DGMs with DRL has been shown to address this concern efficiently. For instance, Dong et al. [7] present a GAN-empowered DRL approach for rapidly transferring routing knowledge from S to T. To do so, a generator G_S is first trained to extract R_S, the latent representation of S. Then, they train a DRL model that learns the routing strategies based on R_S rather than S. If the state structure changes to T, another generator G_T is used to generate R_T.…”
Section: E. Use Cases
confidence: 99%
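A minimal sketch of the transfer idea described in the statement above, assuming a PyTorch-style setup: the module names (Generator, Discriminator, Policy), dimensions, and the adversarial alignment loop are illustrative assumptions, not the implementation of Dong et al. [7]. The point is only that the routing policy consumes a fixed-size latent R, so when the state structure changes from S to T, only a new generator G_T needs to be trained to match the latent distribution of G_S, while the policy is reused.

```python
# Illustrative sketch only; all names and hyperparameters are assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 32  # assumed size of the shared latent representation R

class Generator(nn.Module):
    """Maps a raw, topology-dependent state vector to a fixed-size latent R."""
    def __init__(self, state_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, LATENT_DIM),
        )
    def forward(self, s):
        return self.net(s)

class Discriminator(nn.Module):
    """Distinguishes latents produced from the source domain vs. the target domain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )
    def forward(self, r):
        return self.net(r)

class Policy(nn.Module):
    """Routing policy trained once on the latent representation only."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )
    def forward(self, r):
        return self.net(r)  # action logits, e.g., next-hop scores

def align_target_generator(g_s, g_t, disc, source_states, target_states, steps=1000):
    """Adversarially train G_T so that R_T = G_T(T) matches the distribution of
    R_S = G_S(S); the already-trained policy can then be reused unchanged."""
    opt_g = torch.optim.Adam(g_t.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
    bce = nn.BCELoss()
    for _ in range(steps):
        r_s = g_s(source_states).detach()   # "real" latents from the frozen source generator
        r_t = g_t(target_states)            # "fake" latents from the target generator
        # Discriminator step: push source latents toward 1, target latents toward 0.
        d_loss = bce(disc(r_s), torch.ones(len(r_s), 1)) + \
                 bce(disc(r_t.detach()), torch.zeros(len(r_t), 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator step: make target latents indistinguishable from source latents.
        g_loss = bce(disc(r_t), torch.ones(len(r_t), 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return g_t
```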
“…For instance, as a network-economic approach, contract theory helps design fair contracts between clients and service providers regarding the provided service and the reward, which is significant for incentivizing participation and improving operational quality [6]. For decision-making problems in wireless networks (e.g., resource allocation), DRL is regarded as one of the most efficient solutions [7], [8]. Using a Markov decision process, DRL models enable agents to learn the optimal action in a given state by setting different rewards.…”
Section: B. Major Issues in Wireless Network Management
confidence: 99%
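As a hedged illustration of the MDP formulation the statement above refers to, the toy sketch below uses tabular Q-learning in place of a deep network: the agent observes a state, picks an action, receives a reward shaped by the management objective, and updates its value estimates. The env_step hook, state/action sets, and hyperparameters are assumptions for illustration and do not come from the cited works.

```python
# Toy MDP/Q-learning sketch; DRL would replace the Q-table with a neural network.
import random
from collections import defaultdict

def q_learning(env_step, states, actions, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """env_step(s, a) -> (next_state, reward) is an assumed toy environment hook,
    e.g., allocating one of `actions` resource blocks while in state s."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s = random.choice(states)
        for _ in range(50):  # bounded episode length
            # Epsilon-greedy action selection over the current value estimates.
            a = random.choice(actions) if random.random() < eps else \
                max(actions, key=lambda x: Q[(s, x)])
            s_next, r = env_step(s, a)
            # The reward design encodes the operator's objective (e.g., throughput vs. fairness).
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)])
            s = s_next
    return Q
```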
“…The structure of the network states, i.e., the number and connectivity of devices, changes accordingly. Traditional approaches, e.g., DRL, cannot cope well with such mobility [7]. Recall that DRL models are trained to select the optimal action in a given state.…”
Section: B. Major Issues in Wireless Network Management
confidence: 99%