2019
DOI: 10.1109/tgcn.2019.2924242

Delay-Optimal Resource Scheduling of Energy Harvesting-Based Devices

Abstract: This paper investigates resource scheduling in a wireless communication system operating with Energy Harvesting (EH) based devices and perfect Channel State Information (CSI). The aim is to minimize the packet loss that occurs when the buffer overflows or when a queued packet is older than a pre-defined threshold. We therefore consider a strict delay constraint rather than an average delay constraint. The associated optimization problem is modeled as a Markov Decision Process (MDP) where the actions are t…
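The MDP formulation summarized in the abstract can be illustrated with a minimal value-iteration sketch. Everything below is an assumption for illustration: a toy state of (queue length, battery level), Bernoulli packet arrivals and energy harvesting, and a per-step cost counting only buffer-overflow drops. The paper's actual state space, transition model, and age-violation cost are not reproduced here; only the general idea of solving the scheduling MDP by value iteration is.

```python
# Hypothetical sketch of value iteration for an EH scheduling MDP.
# All model constants below are illustrative assumptions, not the paper's model.
import itertools

Q_MAX = 5                  # buffer capacity (packets)            -- assumed
B_MAX = 4                  # battery capacity (energy units)      -- assumed
E_TX = 1                   # energy to transmit one packet        -- assumed
P_ARRIVAL = [0.5, 0.5]     # P(0 or 1 packet arrives per slot)    -- assumed
P_HARVEST = [0.5, 0.5]     # P(0 or 1 energy unit harvested)      -- assumed
GAMMA = 0.95
THETA = 1e-6

states = list(itertools.product(range(Q_MAX + 1), range(B_MAX + 1)))
V = {s: 0.0 for s in states}

def step_cost_and_next(q, b, a, arrivals, harvest):
    """Transmit `a` packets, then admit arrivals and harvested energy.

    The cost counts packets dropped by buffer overflow, a simplified proxy
    for the overflow/age-violation loss described in the abstract."""
    q_after_tx = q - a
    b_after_tx = b - a * E_TX
    dropped = max(0, q_after_tx + arrivals - Q_MAX)
    q_next = min(Q_MAX, q_after_tx + arrivals)
    b_next = min(B_MAX, b_after_tx + harvest)
    return dropped, (q_next, b_next)

def value_iteration():
    """Plain value iteration: sweeps over all (queue, battery) states."""
    while True:
        delta = 0.0
        for (q, b) in states:
            best = float("inf")
            # feasible actions: cannot send more packets than queued or affordable
            for a in range(0, min(q, b // E_TX) + 1):
                expected = 0.0
                for arr, p_a in enumerate(P_ARRIVAL):
                    for har, p_h in enumerate(P_HARVEST):
                        cost, s_next = step_cost_and_next(q, b, a, arr, har)
                        expected += p_a * p_h * (cost + GAMMA * V[s_next])
                best = min(best, expected)
            delta = max(delta, abs(best - V[(q, b)]))
            V[(q, b)] = best
        if delta < THETA:
            break
    return V

if __name__ == "__main__":
    value_iteration()
    print("V(queue=3, battery=2) =", round(V[(3, 2)], 3))
```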

Cited by 6 publications (2 citation statements) · References: 30 publications
“…However, efficient scheduling of arrived tasks with different priorities and task deadlines is also crucial. The authors in [9] consider a strict delay constraint for energy harvesting devices to find the best number of packets for transmission by modeling the problem as a Markov decision process (MDP). Solving the MDP through value iteration requires iterating over all states.…”
Section: Literature Review
confidence: 99%
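The remark that value iteration must iterate over all states can be made concrete with a back-of-the-envelope state count. The sketch below assumes a hypothetical state encoding of (queue occupancy, per-packet age up to the deadline, battery level); this encoding is illustrative and not the one used in [9], but it shows the multiplicative growth that makes exact sweeps expensive as the buffer and deadline grow.

```python
# Illustrative state-count bound for the "iterate over all states" remark.
# The encoding (queue length, each queued packet's age, battery level) is an
# assumption; the point is the multiplicative growth of the joint state space.
def mdp_state_count(buffer_size: int, max_age: int, battery_levels: int) -> int:
    # Coarse upper bound: queue length times one age slot per buffer position
    # times battery level -- enough to show the combinatorial blow-up.
    return (buffer_size + 1) * (max_age + 1) ** buffer_size * (battery_levels + 1)

for buf in (2, 5, 10):
    print(f"buffer={buf:>2}: ~{mdp_state_count(buf, max_age=4, battery_levels=10):,} states")
```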
“…The value V_θ is also used in order to estimate nodes' values. In this regard, we consider a linear combination of the super states' estimated value, calculated as stated in (9), and the neural network's output V_θ to update nodes' values. In this way, as shown in Section VI, using VP-NN improves computation and enhances performance in two ways: first, improving the policy and estimated values in the tree; second, reducing both the depth and breadth of the search tree.…”
Section: A. SS-MCTS With Neural Network
confidence: 99%
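The linear combination described in this statement can be sketched as a blended node-value backup. The node fields, the weight alpha, and the stand-in value_net below are assumptions for illustration; the cited work's exact super-state value from its equation (9) and its network architecture are not reproduced here.

```python
# Minimal sketch of blending a tree-derived value with a neural estimate V_theta.
# The blend weight, node structure, and value_net are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Node:
    state: Tuple[int, ...]
    visit_count: int = 0
    value: float = 0.0                       # running value used by the tree policy
    children: List["Node"] = field(default_factory=list)

def blended_backup(node: Node,
                   super_state_value: float,
                   value_net: Callable[[Tuple[int, ...]], float],
                   alpha: float = 0.5) -> None:
    """Update the node with alpha * tree estimate + (1 - alpha) * V_theta."""
    v_theta = value_net(node.state)
    target = alpha * super_state_value + (1.0 - alpha) * v_theta
    # incremental-mean update, as in standard MCTS backups
    node.visit_count += 1
    node.value += (target - node.value) / node.visit_count

if __name__ == "__main__":
    net = lambda s: 0.1 * sum(s)             # stand-in for the learned V_theta
    n = Node(state=(3, 2))
    blended_backup(n, super_state_value=1.5, value_net=net, alpha=0.6)
    print(round(n.value, 3))
```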