2018
DOI: 10.1109/tgcn.2018.2801725

RLMan: An Energy Manager Based on Reinforcement Learning for Energy Harvesting Wireless Sensor Networks

Abstract: A promising solution to achieve autonomous wireless sensor networks is to enable each node to harvest energy in its environment. To address the time-varying behavior of energy sources, each node embeds an energy manager responsible for dynamically adapting the power consumption of the node in order to maximize the quality of service while avoiding power failures. A novel energy management algorithm based on reinforcement learning, named RLMan, is proposed in this work. By continuously exploring the environment…


Cited by 104 publications (72 citation statements)
References 16 publications
“…They simulated a communication environment and approximated the state space with discrete features indicating the battery level and its constraints, the energy harvested over an hour, the characteristics of the communication link, the data arrival process, and the data buffers at the communicating nodes. Similarly, Aoudia et al. [6] used an actor-critic method with linear function approximation to learn approximations of both the policy and the value function. They used a Gaussian policy to generate continuous values of bounded packet rates and summarized the state space by the continuous value of the current residual energy.…”
Section: Related Work (mentioning)
confidence: 99%
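The approach attributed to Aoudia et al. [6] above can be made concrete with a short sketch. The Python snippet below implements a generic actor-critic learner with linear function approximation and a Gaussian policy over a bounded continuous packet rate, using the residual energy as the only state feature. The feature choice, rate bounds, step sizes, reward shape, and toy harvesting model are all assumptions made for illustration, not the authors' implementation.

import numpy as np

RATE_MIN, RATE_MAX = 0.1, 10.0  # assumed packet-rate bounds (packets/s)

def features(residual_energy):
    # State features: normalized residual energy in [0, 1] plus a bias term.
    return np.array([residual_energy, 1.0])

class GaussianActorCritic:
    def __init__(self, n_features, alpha_actor=1e-4, alpha_critic=1e-2, gamma=0.99):
        self.w_mu = np.zeros(n_features)   # actor weights: mean of the Gaussian
        self.log_sigma = 0.0               # actor parameter: exploration scale
        self.w_v = np.zeros(n_features)    # critic weights: linear value function
        self.alpha_a, self.alpha_c, self.gamma = alpha_actor, alpha_critic, gamma

    def act(self, phi):
        mu = self.w_mu @ phi
        sigma = np.exp(self.log_sigma)
        raw = np.random.normal(mu, sigma)
        return float(np.clip(raw, RATE_MIN, RATE_MAX)), raw

    def update(self, phi, raw_action, reward, phi_next):
        # TD(0) error from the linear critic.
        delta = reward + self.gamma * (self.w_v @ phi_next) - self.w_v @ phi
        # Critic: semi-gradient TD update.
        self.w_v += self.alpha_c * delta * phi
        # Actor: policy-gradient step for a Gaussian policy.
        mu = self.w_mu @ phi
        sigma = np.exp(self.log_sigma)
        self.w_mu += self.alpha_a * delta * ((raw_action - mu) / sigma**2) * phi
        self.log_sigma += self.alpha_a * delta * ((raw_action - mu)**2 / sigma**2 - 1.0)

# Toy interaction loop: the reward is the achieved packet rate, zeroed on a
# power failure. The harvesting and consumption models are invented here.
agent = GaussianActorCritic(n_features=2)
energy = 0.5
for step in range(10_000):
    phi = features(energy)
    rate, raw = agent.act(phi)
    energy += np.random.uniform(0.0, 0.02) - 0.002 * rate  # harvest minus spend
    failed = energy <= 0.0
    energy = float(np.clip(energy, 0.05, 1.0))  # toy recovery and saturation
    agent.update(phi, raw, 0.0 if failed else rate, features(energy))

Clipping the sampled rate enforces the hardware bounds while the raw sample is used in the gradient, keeping the update a valid policy-gradient step for the unclipped Gaussian.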
“…In order to enable sustainable operation of IoT nodes, energy harvesting technologies have long been investigated, both by improving hardware components and platforms and by developing the associated software methods that manage energy consumption [4,8,9]. These software components are often referred to as energy managers and are not addressed in this study.…”
Section: Related Work on Energy Harvesting for Long-Range Platforms (mentioning)
confidence: 99%
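As a rough illustration of what such an energy manager looks like as a software component, the hypothetical Python interface below maps the node's energy state to a consumption budget once per wake-up period. The class and method names are assumptions made for this sketch, not an API from the cited works.

from abc import ABC, abstractmethod

class EnergyManager(ABC):
    # Hypothetical interface: called once per wake-up period with the current
    # energy state; returns the consumption budget (e.g., a packet rate or
    # duty cycle) for the next period.
    @abstractmethod
    def next_budget(self, residual_energy_j: float, harvested_j: float) -> float:
        ...

class FixedBudgetManager(EnergyManager):
    # Trivial baseline that ignores the energy state entirely.
    def __init__(self, budget: float):
        self.budget = budget

    def next_budget(self, residual_energy_j: float, harvested_j: float) -> float:
        return self.budget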
“…This parameter is set to 2 days per week without any illuminance. Moreover, by applying (8), the supercapacitor needs to be sized at 148.87 mF in order to absorb the current peaks of the transmission process (40.9 mA at 14 dBm). Results show that increasing the interval from 1 min to 60 min induces a battery-size reduction factor of 41 for the SL setup, of 4.5 for the SD setup, and of 1.3 for the SH setup.…”
Section: Energy Consumption Evaluation (mentioning)
confidence: 99%
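Equation (8) of the cited study is not reproduced on this page, but its output can be sanity-checked with the common first-order sizing rule C = I·Δt/ΔV, the capacitance needed to supply a current peak I for a duration Δt while the supply voltage droops by at most ΔV. In the Python check below, the duration and allowed droop are illustrative assumptions, not the paper's values; only the 40.9 mA peak comes from the text above.

I_peak = 40.9e-3  # transmission current peak at 14 dBm, from the text (A)
t_tx = 1.0        # assumed worst-case transmission duration (s)
dV = 0.3          # assumed tolerable supply-voltage droop (V)

C = I_peak * t_tx / dV  # first-order sizing rule: C = I * dt / dV
print(f"required capacitance ~= {C * 1e3:.1f} mF")  # ~136 mF under these assumptions

With these placeholder values the rule lands in the same range as the 148.87 mF reported above.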
“…Khelladi et al. [5] proposed multi-node charging, in which several sensors are charged at every stop for energy transfer. Aoudia et al. [6] proposed RLMan, a novel algorithm based on reinforcement learning for energy conservation in sensor nodes. This method achieved a gain of almost 70% in average packet rate.…”
Section: Charging Strategies (mentioning)
confidence: 99%