2017
DOI: 10.1145/3126495

Adaptive Power Management in Solar Energy Harvesting Sensor Node Using Reinforcement Learning

Abstract: In this paper, we present an adaptive power manager for solar energy harvesting sensor nodes. We use a simplified model consisting of a solar panel, an ideal battery and a general sensor node with variable duty cycle. Our power manager uses Reinforcement Learning (RL), specifically SARSA(λ) learning, to train itself from historical data. Once trained, we show that our power manager is capable of adapting to changes in weather, climate, device parameters and battery degradation while ensuring near-optimal perfo…
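The abstract describes training a SARSA(λ) agent on historical solar data and then adapting the node's duty cycle online. As a rough illustration of that mechanism only, the sketch below shows a tabular SARSA(λ) update with eligibility traces; the state discretization (battery and harvest bins), the duty-cycle levels, and all hyperparameters are illustrative assumptions, not values taken from the paper.

```python
# Minimal tabular SARSA(lambda) sketch for duty-cycle selection in a
# solar-harvesting node. State bins, duty-cycle levels, and hyperparameters
# are illustrative assumptions, not the paper's actual configuration.
import numpy as np

N_BATTERY_BINS = 10                          # discretized battery state of charge
N_HARVEST_BINS = 5                           # discretized hourly harvested energy
DUTY_CYCLES = [0.1, 0.25, 0.5, 0.75, 1.0]    # candidate duty-cycle actions

ALPHA, GAMMA, LAMBDA_, EPSILON = 0.1, 0.99, 0.9, 0.05

def make_tables():
    """Q-values and eligibility traces, one entry per (battery, harvest, action)."""
    shape = (N_BATTERY_BINS, N_HARVEST_BINS, len(DUTY_CYCLES))
    return np.zeros(shape), np.zeros(shape)

def choose_action(Q, state, rng):
    """Epsilon-greedy duty-cycle selection."""
    if rng.random() < EPSILON:
        return int(rng.integers(len(DUTY_CYCLES)))
    b, h = state
    return int(np.argmax(Q[b, h]))

def sarsa_lambda_update(Q, E, state, action, reward, next_state, next_action):
    """One on-policy SARSA(lambda) step: TD error spread over eligibility traces."""
    b, h = state
    nb, nh = next_state
    td_error = reward + GAMMA * Q[nb, nh, next_action] - Q[b, h, action]
    E[b, h, action] += 1.0            # accumulating trace for the visited pair
    Q += ALPHA * td_error * E         # update all recently visited pairs
    E *= GAMMA * LAMBDA_              # decay traces
    return td_error
```

In a training loop the agent would, each hour, observe the discretized battery level and harvested energy, pick a duty cycle with choose_action, run the node, compute a reward (for example, one penalizing deviation from energy neutrality), and apply sarsa_lambda_update.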

Cited by 60 publications (47 citation statements). References 19 publications.

Citation statements (ordered by relevance):
“…Shaswot et al [45] use solar energy harvesting sensor nodes powered by a battery to teach an RL system to achieve energy-neutral operation. They use the SARSA algorithm [47] to study the impact of weather, battery degradation and changes to hardware.…”
Section: Related Work
confidence: 99%
“…Papers [144], [143] and [145] considered the power management policy at the sensor nodes, i.e., how to schedule the energy for sensing, transmission and sleeping. Hsu et al [144] proposed and implemented a fuzzy Q-learning algorithm, while Shresthamali et al [143] implemented the Q-learning algorithm. Without real-world experiments, Aoudia et al [145] modeled the problem as a discrete MDP and proposed an actor-critic learning algorithm.…”
Section: A Reinforcement Learning Based Communication Optimization
confidence: 99%
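The survey statement above groups Q-learning-style and actor-critic approaches to scheduling node energy. For contrast with the on-policy SARSA(λ) update sketched earlier, a plain one-step, off-policy Q-learning update (reusing the same illustrative Q table and constants, which are assumptions rather than details from the cited works) would look roughly like this:

```python
def q_learning_update(Q, state, action, reward, next_state):
    """One off-policy Q-learning step: bootstrap from the greedy next action
    rather than the action the policy will actually take (as SARSA does)."""
    b, h = state
    nb, nh = next_state
    td_target = reward + GAMMA * np.max(Q[nb, nh])
    Q[b, h, action] += ALPHA * (td_target - Q[b, h, action])
```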
“…Shresthamali et al [2] used a SARSA(λ) RL algorithm to develop adaptive power management for a solar-energy harvesting sensor node. To simulate a sensor node, they used a scaled-up version of a real sensor powered by a battery and a solar panel, and used solar radiation data to calculate hourly harvested energy.…”
Section: Related Work
confidence: 99%
“…To get a baseline for the performance and suitability, we first apply PPO in the same setting as Shresthamali et al [2], who used the SARSA(λ) algorithm and designed a reward function based on the distance from energy neutrality. The concept of energy-neutrality was introduced by Kansal et al [18], which states that a node is in energy-neutral operation if the consumed energy is less than or equal to the harvested energy.…”
Section: Reward Function Based On Energy Neutrality
confidence: 99%
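This statement ties the reward to the distance from energy neutrality in Kansal et al's sense: a node is energy-neutral when its consumed energy does not exceed its harvested energy. A minimal, assumed reward shaping consistent with that description is sketched below; the exact formulation used by Shresthamali et al. or in the PPO comparison above may differ.

```python
def energy_neutrality_reward(harvested_energy, consumed_energy):
    """Reward peaks at 0 when consumption exactly matches harvest and is
    negative otherwise. An assumed illustration of 'distance from energy
    neutrality', not the reward actually used in the cited papers."""
    return -abs(harvested_energy - consumed_energy)
```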