2023
DOI: 10.1109/jiot.2022.3209987
Multitask Multiobjective Deep Reinforcement Learning-Based Computation Offloading Method for Industrial Internet of Things

Cited by 24 publications (2 citation statements)
References 33 publications
“…Throughout the experiments, we assume that UEs are randomly distributed within an area of 350 m × 350 m. Additionally, the reference distance, channel bandwidth between the UE and the ENs, and the transmit power from UEs to ENs are 50 m, 6 MHz, and 25 dBm, respectively [31].…”
Section: A. Simulation Settings
confidence: 99%
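The simulation parameters quoted above (350 m × 350 m area, 50 m reference distance, 6 MHz bandwidth, 25 dBm transmit power) can be collected into a small sketch. The uniform-random UE placement and the dBm-to-watts helper are illustrative assumptions, not details taken from the cited paper:

```python
import random

# Constants quoted in the citation statement above.
AREA_SIDE_M = 350.0     # square deployment area, metres
REF_DIST_M = 50.0       # reference distance, metres
BANDWIDTH_HZ = 6e6      # UE-to-EN channel bandwidth
TX_POWER_DBM = 25.0     # UE transmit power

def dbm_to_watts(dbm: float) -> float:
    """Convert a dBm power level to watts (25 dBm ~= 0.316 W)."""
    return 10 ** ((dbm - 30.0) / 10.0)

def place_ues(n: int, seed: int = 0) -> list[tuple[float, float]]:
    """Drop n UEs uniformly at random in the square area (assumed model)."""
    rng = random.Random(seed)
    return [(rng.uniform(0.0, AREA_SIDE_M), rng.uniform(0.0, AREA_SIDE_M))
            for _ in range(n)]
```

Any path-loss or rate model built on top of these constants would need the channel assumptions from the original paper, which are not reproduced in this excerpt.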
“…Besides the above works, some works also define special optimization objectives for task-resource scheduling. For example, [28] formulates a multi-objective problem to minimize latency and energy consumption simultaneously, and employs MADRL to make optimal offloading decisions for cloud-edge-end computing. [29] proposes an end-to-end DRL algorithm to simultaneously maximize the number of tasks completed before their respective deadlines and minimize energy consumption.…”
Section: Cost Minimization
confidence: 99%
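A multi-objective formulation like the one described for [28] (jointly minimizing latency and energy) is commonly reduced to a single DRL reward via weighted-sum scalarization. A minimal illustrative sketch, with hypothetical weights not taken from the cited works:

```python
def scalarized_reward(latency_s: float, energy_j: float,
                      w_latency: float = 0.5, w_energy: float = 0.5) -> float:
    """Weighted-sum scalarization of two objectives to be minimized.

    Both latency and energy are costs, so the agent's reward is their
    negative weighted sum; the weights here are illustrative placeholders.
    """
    return -(w_latency * latency_s + w_energy * energy_j)
```

With this shape, an agent maximizing expected reward implicitly trades off the two objectives according to the chosen weights; Pareto-based methods, by contrast, keep the objectives separate rather than collapsing them into one scalar.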