2015
DOI: 10.1109/jsac.2015.2478717

Adaptive Duty Cycling in Sensor Networks With Energy Harvesting Using Continuous-Time Markov Chain and Fluid Models

Cited by 34 publications (13 citation statements; citing publications from 2016 to 2023). References 37 publications.

“…More recently, reinforcement learning (RL) algorithms have been proposed as a more flexible and powerful approach to achieving node-level energy neutrality in energy harvesting sensor networks. In Chan et al. (2015) the challenge of maximising quality of service while maintaining battery reserves is framed as a continuous-time Markov decision process and solved using tabular Q-learning. Hsu, Liu, & Wang (2014) propose a tabular Q-learning approach for query-driven wireless sensor networks.…”
Section: Related Work
confidence: 99%
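As a rough illustration of the tabular approach described in this excerpt, the sketch below runs Q-learning over a discretized battery level with duty-cycle actions. The state space, harvesting model, and reward shaping are invented for illustration and are not the models used in the cited papers.

```python
# Minimal sketch of tabular Q-learning for duty-cycle control, in the
# spirit of Chan et al. (2015) and Hsu, Liu, & Wang (2014). All
# quantities below (states, actions, rewards) are illustrative
# assumptions, not the cited papers' exact models.
import random

N_BATTERY_LEVELS = 10                 # discretized residual energy
DUTY_CYCLES = [0.0, 0.25, 0.5, 1.0]   # candidate actions
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = [[0.0] * len(DUTY_CYCLES) for _ in range(N_BATTERY_LEVELS)]

def step(battery, action_idx):
    """Toy environment: harvest energy, spend it on the chosen duty cycle."""
    duty = DUTY_CYCLES[action_idx]
    harvested = random.randint(0, 2)          # stochastic harvesting
    consumed = int(round(3 * duty))           # higher duty cycle costs more
    nxt = max(0, min(N_BATTERY_LEVELS - 1, battery + harvested - consumed))
    # Reward quality of service (duty cycle) but penalize a dead battery.
    reward = duty - (5.0 if nxt == 0 else 0.0)
    return nxt, reward

battery = N_BATTERY_LEVELS // 2
for _ in range(50_000):
    a = (random.randrange(len(DUTY_CYCLES)) if random.random() < EPSILON
         else max(range(len(DUTY_CYCLES)), key=lambda i: Q[battery][i]))
    nxt, r = step(battery, a)
    Q[battery][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[battery][a])
    battery = nxt

for s, row in enumerate(Q):
    best = DUTY_CYCLES[max(range(len(row)), key=lambda i: row[i])]
    print(f"battery level {s}: duty cycle {best}")
```

The learned policy typically throttles the duty cycle at low battery levels and raises it as reserves grow, which is the energy-neutral behaviour the excerpt describes.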
“…If 0% duty cycle periods occur simultaneously at all nodes in the network, any events occurring in this period will go undetected. Tabular reinforcement learning algorithms as proposed in (Chan et al., 2015), (Hsu et al., 2014), (Shresthamali et al., 2017) are likely to be intractable when considering the entire network, due to the size of the state-action space, so more powerful reinforcement learning approaches are required which utilise function approximators instead of lookup tables, as in (Mnih et al., 2015), (Mnih et al., 2016), (Peters & Schaal, 2008). Deep neural network based approaches have also been proposed recently for civil and structural monitoring problems, outside the context of reinforcement learning, as in Y.…”
Section: Related Work
confidence: 99%
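To illustrate the shift from lookup tables to function approximation that this excerpt calls for, the following sketch replaces the Q-table with a linear approximator over a continuous multi-node battery state. A linear model stands in for the deep networks of Mnih et al.; the features, toy environment, and reward are assumptions.

```python
# Semi-gradient Q-learning with a linear function approximator over a
# network-wide battery state. Everything about the environment here is
# an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
DUTY_CYCLES = np.array([0.0, 0.25, 0.5, 1.0])
N_NODES = 5                          # one battery level per node, in [0, 1]
ALPHA, GAMMA, EPSILON = 0.01, 0.95, 0.1

# One weight vector per action over the continuous network state.
W = np.zeros((len(DUTY_CYCLES), N_NODES + 1))   # +1 for a bias feature

def features(state):
    return np.append(state, 1.0)

def q_values(state):
    return W @ features(state)

def step(state, action_idx):
    """Toy network model: each node harvests and spends energy."""
    duty = DUTY_CYCLES[action_idx]
    harvest = rng.uniform(0.0, 0.1, size=N_NODES)
    nxt = np.clip(state + harvest - 0.15 * duty, 0.0, 1.0)
    # Reward sensing activity, penalize any fully depleted node.
    reward = duty - 2.0 * np.any(nxt == 0.0)
    return nxt, reward

state = np.full(N_NODES, 0.5)
for _ in range(20_000):
    a = (rng.integers(len(DUTY_CYCLES)) if rng.random() < EPSILON
         else int(np.argmax(q_values(state))))
    nxt, r = step(state, a)
    td_error = r + GAMMA * np.max(q_values(nxt)) - q_values(state)[a]
    W[a] += ALPHA * td_error * features(state)   # semi-gradient update
    state = nxt

print("greedy duty cycle at full batteries:",
      DUTY_CYCLES[int(np.argmax(q_values(np.ones(N_NODES))))])
```

The key point is that the parameter count scales with the feature dimension rather than with the exponential joint state space, which is what makes network-level control tractable.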
“…Adaptive duty cycling schemes focus more on achieving energy efficiency while fulfilling some QoS parameters such as throughput and delay [25], [26]. Achieving a low duty cycle results in high energy savings but, in most cases, leads to increased delay [4].…”
Section: Introduction
confidence: 99%
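A back-of-the-envelope calculation makes this trade-off concrete: the energy-neutral duty cycle follows from balancing average consumption against harvested power, and lowering it stretches the wait for the next active window. The power figures and the delay approximation below are illustrative assumptions, not values from [4], [25], or [26].

```python
# Energy/delay trade-off sketch. All power figures are made-up
# placeholders for illustration.
P_ACTIVE_MW = 60.0      # radio on (assumed)
P_SLEEP_MW = 0.05       # radio off (assumed)
P_HARVEST_MW = 5.0      # average harvested power (assumed)
WAKE_PERIOD_S = 1.0     # one wake-up opportunity per period

# Energy neutrality: average consumption must not exceed harvesting,
#   d * P_active + (1 - d) * P_sleep <= P_harvest
duty = (P_HARVEST_MW - P_SLEEP_MW) / (P_ACTIVE_MW - P_SLEEP_MW)
duty = max(0.0, min(1.0, duty))

# Crude uniform-arrival approximation: a packet arriving during the
# sleep window waits, on average, half the remaining sleep time.
expected_delay_s = (1.0 - duty) ** 2 * WAKE_PERIOD_S / 2.0

print(f"energy-neutral duty cycle: {duty:.3f}")
print(f"approximate per-hop wake-up delay: {expected_delay_s:.3f} s")
```

With these numbers the node can sustain only about an 8% duty cycle, and the residual sleep time dominates per-hop latency, matching the excerpt's observation that energy savings come at the cost of delay.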
“…Therefore, researchers have made great efforts to reduce energy consumption and to prolong network lifetime in various application fields [1][2][3][4][5][6][7]. Recently, energy harvesting technologies have been proposed [8] and various problems in energy harvesting sensor networks have been studied [9][10][11][12][13][14][15][16][17][18][19][20][21][22].…”
Section: Introduction
confidence: 99%
“…The bit-reversal permutation sequence can also be used to achieve this aim [17]. In [18], a new framework was developed to model adaptive duty cycling in energy harvesting sensor networks as a Markov Decision Process. To achieve optimal performance, an optimal scheduling algorithm was proposed in [19] which considered the residual energy to achieve close-to-optimal utility.…”
Section: Introduction
confidence: 99%
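For a concrete view of the MDP framing attributed to [18] and the residual-energy-aware scheduling of [19], the sketch below runs value iteration on a toy battery model. The transition probabilities and rewards are simplified assumptions, not the models from those papers.

```python
# Value iteration for a toy duty-cycling MDP over residual-energy
# states. Harvesting distribution, costs, and rewards are assumed.
N_LEVELS = 10                          # residual-energy states
DUTY_CYCLES = [0.0, 0.5, 1.0]
GAMMA = 0.95
HARVEST_P = {0: 0.3, 1: 0.4, 2: 0.3}   # assumed harvesting distribution

def transitions(level, duty):
    """Yield (probability, next_level, reward) triples."""
    cost = round(2 * duty)
    for harvested, p in HARVEST_P.items():
        nxt = max(0, min(N_LEVELS - 1, level + harvested - cost))
        reward = duty - (5.0 if nxt == 0 else 0.0)  # utility vs. depletion
        yield p, nxt, reward

V = [0.0] * N_LEVELS
for _ in range(500):                    # iterate to (near) fixpoint
    V = [max(sum(p * (r + GAMMA * V[n]) for p, n, r in transitions(s, d))
             for d in DUTY_CYCLES)
         for s in range(N_LEVELS)]

policy = [max(DUTY_CYCLES,
              key=lambda d: sum(p * (r + GAMMA * V[n])
                                for p, n, r in transitions(s, d)))
          for s in range(N_LEVELS)]
print("duty cycle per residual-energy level:", policy)
```

Because the model is known here, value iteration finds the optimal residual-energy-dependent schedule directly; the Q-learning approaches quoted earlier target the same policy when the harvesting and consumption dynamics are unknown.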