2014 23rd International Conference on Computer Communication and Networks (ICCCN)
DOI: 10.1109/icccn.2014.6911786

Fast Markov Decision Process for data collection in sensor networks

Abstract: We investigate the data collection problem in sensor networks. The network consists of a number of stationary sensors deployed at different sites for sensing and storing data locally. A mobile element moves from site to site to collect data from the sensors periodically. There are different costs associated with the mobile element moving from one site to another, and different rewards for obtaining data at different sensors. Furthermore, the costs and the rewards are assumed to change abruptly. The goal is t…
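As a reading aid, a minimal sketch of how a data-collection objective of this kind might be posed as an average-reward MDP; the abstract is truncated, so the symbols below are illustrative rather than the paper's own formulation:

$$\max_{\pi}\;\liminf_{T\to\infty}\frac{1}{T}\,\mathbb{E}_{\pi}\!\left[\sum_{t=1}^{T}\bigl(r(s_t)-c(s_{t-1},s_t)\bigr)\right],$$

where $s_t$ is the site visited at step $t$, $r(s)$ is the reward for collecting data at site $s$, and $c(s,s')$ is the travel cost between sites. The abrupt changes in costs and rewards are what motivate "fast" policies, i.e., policies whose induced Markov chain converges quickly to its new stationary behavior.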

Cited by 1 publication (2015; 1 citation statement)
References: 15 publications
“…For queuing systems, we can find fast queuing policies to achieve a desired stationary distribution by minimizing the SLEM via convex relaxation [32]. In [33], fast policies of collecting data in sensor networks with mobile elements are obtained by including mixing time or SLEM as a trade-off term in the objective function of the formulated optimization problem. In [34], the mixing time is considered as a regularization term to accelerate the learning phase of reinforcement learning.…”
Section: Related Work
confidence: 99%
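To illustrate the SLEM quantity referenced in the citation statement, here is a minimal sketch of computing the second largest eigenvalue modulus of a transition matrix and the associated mixing-time scale; the matrix values are hypothetical and not taken from the cited papers:

```python
import numpy as np

# Hypothetical row-stochastic transition matrix over four sites
# (illustrative values only, not from the cited paper).
P = np.array([
    [0.1, 0.6, 0.2, 0.1],
    [0.3, 0.1, 0.4, 0.2],
    [0.2, 0.3, 0.1, 0.4],
    [0.4, 0.2, 0.3, 0.1],
])

# SLEM: second largest eigenvalue modulus of P. The largest modulus of a
# stochastic matrix is 1; a smaller SLEM means the chain mixes faster.
moduli = np.sort(np.abs(np.linalg.eigvals(P)))
slem = moduli[-2]

print(f"SLEM = {slem:.4f}")
# Rough mixing-time scale (up to constants): 1 / (1 - SLEM).
print(f"mixing-time scale ~ {1.0 / (1.0 - slem):.2f} steps")
```

Including this SLEM (or the mixing time it bounds) as a trade-off or regularization term in an optimization objective is the mechanism the citing work attributes to [33] and [34].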