2013 IEEE International Symposium on Information Theory
DOI: 10.1109/isit.2013.6620497
Low-complexity scheduling policies for energy harvesting communication networks

Cited by 25 publications (51 citation statements); References 5 publications

Citation statements (ordered by relevance):
“…The second class comprises online approaches [14]–[16]. The authors of [14] studied a multi-access wireless system with EH transmitters and modeled the access problem as a partially observable Markov decision process (POMDP).…”
Section: A. Related Work and Motivations
confidence: 99%
“…The corresponding throughput maximization problems are formulated as partially observable Markov decision processes (POMDPs) and cast as restless multi-armed bandits. [14] and [15] show, under different system models, the optimality of a round-robin-based myopic policy that schedules the K nodes with the largest beliefs to maximize the immediate reward. In [16], [17], under the infinite-battery assumption, a uniformizing random-ordered policy that selects sensors according to a predefined random priority list and the outcomes of the previous slot's transmissions is shown to be asymptotically optimal over an infinite horizon for a broad class of energy harvesting processes.…”
Section: Related Work
confidence: 99%
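The belief-based myopic policy described in this excerpt can be sketched as follows. The unit-capacity Bernoulli battery model, the function names, and the parameter `p_harvest` are illustrative assumptions, not taken from the cited papers; when harvesting statistics are identical across nodes, repeatedly picking the largest-belief nodes reproduces a round-robin schedule.

```python
def myopic_schedule(beliefs, k):
    """Schedule the k nodes with the largest beliefs (the estimated
    probability that a node's battery holds energy); this choice
    maximizes the expected immediate reward."""
    return sorted(range(len(beliefs)), key=lambda i: beliefs[i], reverse=True)[:k]

def update_beliefs(beliefs, scheduled, p_harvest):
    """One-step belief update under an assumed unit-capacity battery with
    i.i.d. Bernoulli(p_harvest) energy arrivals: a scheduled node's battery
    is known empty afterwards (drained on success, already empty on
    failure), so its belief resets to p_harvest; an unscheduled node either
    already had energy or harvests a unit with probability p_harvest."""
    new = [b + (1.0 - b) * p_harvest for b in beliefs]
    for i in scheduled:
        new[i] = p_harvest
    return new
```

Ties in belief are broken by node index here; the cited papers' policies may use a different tie-breaking rule.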
“…Second, we assume that the battery states of the sensor nodes are available at the fusion center, so the optimization problem is a Markov decision process rather than a POMDP. Third, the optimality of the myopic policies in [14], [15], [23], [24] requires the energy harvesting processes to be uniform across sensors, whereas no such assumption is needed for the optimality of our policy. Fourth, using large deviation theory, we explicitly characterize the convergence rate of our policy.…”
Section: Related Work
confidence: 99%
“…In our work, the transmission parameters are obtained from the users' statistical energy harvesting profiles, and very few real-time message exchanges are required during data transmission. We also note that the authors of [22] studied online optimal scheduling for the MAC, where the AP schedules a subset of the nodes over K orthogonal channels in each slot without knowledge of the energy harvesting processes or the users' battery states. They formulated the problem as a partially observable Markov decision process and showed that the myopic policy, which is equivalent to the round-robin policy, is optimal in two special cases.…”
Section: Introduction
confidence: 99%
“…Likewise, we study the optimal TDMA-based transmission scheme for the MAC based on the users' statistical energy harvesting information. In contrast to [22], [23], we assume that the K users access the channel one by one in a round-robin manner. Via convex optimization, we obtain a closed-form solution indicating that equal-power TDMA is optimal in the infinite-capacity battery case, with the optimal power exactly equal to the average energy storage rate.…”
Section: Introduction
confidence: 99%
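The closed-form result quoted in this excerpt can be illustrated with a short sketch. The Gaussian-channel rate model log2(1 + p) with unit noise and all parameter names are assumptions for illustration only, not taken from the cited work.

```python
import math

def equal_power_tdma_rate(avg_harvest_rate, num_users):
    """Per-user long-run rate under equal-power TDMA for the
    infinite-capacity battery case described in the excerpt: each user
    transmits in its round-robin slot at a constant power equal to its
    average energy storage rate, and holds a 1/K share of the time."""
    p = avg_harvest_rate                     # optimal transmit power
    return math.log2(1.0 + p) / num_users    # rate scaled by time share
```

With an infinite battery, a constant power equal to the average harvest rate never depletes the stored energy in the long run, which is why no power adaptation is needed in this regime.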