2019 IEEE/CIC International Conference on Communications in China (ICCC)
DOI: 10.1109/iccchina.2019.8855817
Reinforcement Learning Based Matching for Computation Offloading in D2D Communications

Cited by 8 publications (2 citation statements)
References 11 publications
“…For a given workload, utilizing only one or several servers at full capacity and turning off the others may result in low energy consumption, but on the other hand contributes to high delay. Therefore, how to allocate computation resources to balance energy consumption and service quality is an important research direction [149], [241], [242].…”
Section: A Power Consumption Modeling
confidence: 99%
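A toy numeric sketch of the energy-delay trade-off described in this excerpt: with a fixed workload, concentrating it on fewer active servers lowers energy but inflates delay. The delay proxy, power numbers, and cost weighting below are illustrative assumptions, not taken from the cited works.

```python
# Minimal sketch (assumed model): weighted energy-delay cost vs. number of active servers.
def cost(active_servers, workload=8.0, service_rate=2.0,
         power_per_server=10.0, weight=0.5):
    """Weighted sum of server energy and a simple delay proxy.

    Delay proxy: workload divided by spare capacity, which grows sharply as
    utilization approaches 1. Purely hypothetical, for illustration only.
    """
    capacity = active_servers * service_rate
    if capacity <= workload:
        return float("inf")  # overloaded: delay effectively unbounded
    energy = active_servers * power_per_server
    delay = workload / (capacity - workload)
    return weight * energy + (1 - weight) * delay

# Sweep the number of active servers to find the balance point.
best = min(range(1, 11), key=cost)
print(best, cost(best))
```

Running the sweep shows the minimum lies between the extremes: too few servers push the delay term toward infinity, while powering on every server wastes energy, which is the allocation trade-off the excerpt points to.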
“…It is difficult to obtain large amounts of data to train ML models. Other studies believe that Reinforcement Learning (RL) is a potential solution to task offloading [18][19][20]. RL-based algorithms enable vehicles to learn an optimal offloading strategy by interacting with the environment.…”
Section: Introduction
confidence: 99%
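As a rough illustration of the RL-based offloading idea in this excerpt, the following is a minimal tabular Q-learning sketch for a binary offload-or-local decision. The state space, toy reward, and hyperparameters are assumptions for illustration only, not the scheme of the cited papers.

```python
# Minimal tabular Q-learning sketch for a binary offloading decision
# (action 0 = compute locally, action 1 = offload). All states, rewards,
# and hyperparameters are illustrative assumptions.
import random

N_STATES = 4      # e.g. coarse channel-quality levels (assumed)
N_ACTIONS = 2     # local vs. offload
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Toy environment: reward favors offloading when the channel state is good."""
    good_channel = state >= 2
    reward = 1.0 if (action == 1) == good_channel else -1.0
    next_state = random.randrange(N_STATES)
    return reward, next_state

state = random.randrange(N_STATES)
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    reward, next_state = step(state, action)
    # standard Q-learning update
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state

print(Q)  # learned action values per state
```

In a vehicular offloading setting such as the one the excerpt describes, the state would more realistically encode channel quality, task size, and device load, and the reward would combine latency and energy terms.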