2022
DOI: 10.3390/jlpea12040053
Multi-Objective Resource Scheduling for IoT Systems Using Reinforcement Learning

Abstract: IoT embedded systems have multiple objectives that need to be maximized simultaneously. These objectives conflict with each other due to limited resources and the tradeoffs that must be made. This requires multi-objective optimization (MOO), and multiple Pareto-optimal solutions are possible. In such a case, tradeoffs are made with respect to a user-defined preference. This work presents a general Multi-Objective Reinforcement Learning (MORL) framework for MOO of IoT embedded systems. This framework comprises a general M…

Cited by 4 publications (4 citation statements)
References 63 publications
“…MC involves randomness in its estimations [79][80][81], while RL continuously improves effectiveness through experimentation and learning [42,82]. Larger sets of proposed felling trees provide more samples for training and enhancing experimental results in MC and RL [38,81,83,84]. PSO is more suitable for optimizing a relatively small number of trees chosen based on QVM or VMM, helping avoid local optimization [85,86].…”
Section: Influence Of Proposed Selective Felling Tree Determination M…
Confidence: 99%
“…Larger sets of proposed felling trees inherently offer more training samples, thus potentially enhancing the experimental outcomes in both MC and RL [38,81,83]. On the other hand, PSO is better suited for optimizing a relatively modest number of trees chosen using QVM or VMM, which aids in avoiding issues related to local optimization [89,90].…”
Section: Influence Of Proposed Selective Felling Tree Determination M…
Confidence: 99%
“…In order to evaluate the efficacy of the proposed framework, Shresthamali, S. et al [32] focused on designs to simulate both single-task and dual-task systems. The results show that their Multi-Objective Reinforcement Learning (MORL) algorithms can learn superior policies while incurring lower learning costs and effectively balancing competing goals during execution.…”
Section: Background Study
Confidence: 99%
“…Shresthamali, S. et al [32] This paper focuses on simulating both single-task and dual-task systems in order to assess the effectiveness of the proposed framework. The outcomes prove that their Multi-Objective Reinforcement Learning (MORL) algorithms are able to learn superior policies with reduced learning costs and effectively balance competing goals during execution.…”
Section: Background Study
Confidence: 99%