49th IEEE Conference on Decision and Control (CDC) 2010
DOI: 10.1109/cdc.2010.5717331

Explorative navigation of mobile sensor networks using sparse Gaussian processes

Abstract: This paper presents an explorative navigation method using sparse Gaussian processes for mobile sensor networks. We first show that a near-optimal approximation is possible with a subset of measurements if we select the subset carefully, i.e., if the correlation between the selected measurements and the remaining measurements is small and the correlation between the prediction locations and the remaining measurements is small. An estimation method based on a subset of measurements is desirable for mobile sensor networks…

Cited by 12 publications (6 citation statements) · References 14 publications

“…The highest-scored sampling unit is selected for inclusion into U and removed from D. This greedy selection procedure is iterated until U reaches a predefined size. Among the various criteria introduced earlier, the differential entropy score [24] is reported to perform well [26]; it is a monotonic function of the posterior variance Σ_{ss|U} (6), thus resulting in the greedy selection of a sampling unit s ∈ D \ U with the largest variance in each iteration.…”
Section: A Subset of Data Approximation
confidence: 99%
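
The greedy differential-entropy rule quoted above is simple to sketch: at each iteration, score every remaining candidate by its GP posterior variance given the already-selected set U and pick the maximizer. The Python sketch below is illustrative only; the RBF kernel, noise level, and helper names (rbf_kernel, greedy_entropy_subset) are assumptions, not the cited papers' implementation.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, signal_var=1.0):
    # Squared-exponential covariance between two sets of locations (rows).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / lengthscale ** 2)

def greedy_entropy_subset(X, m, noise_var=1e-2):
    # Greedily select m indices by the differential-entropy score, i.e. the
    # largest posterior variance sigma^2_{s|U} given the current subset U.
    n = X.shape[0]
    K = rbf_kernel(X, X)
    selected, remaining = [], list(range(n))
    for _ in range(m):
        best_s, best_var = None, -np.inf
        for s in remaining:
            if selected:
                Kuu = K[np.ix_(selected, selected)] + noise_var * np.eye(len(selected))
                k_su = K[s, selected]
                var = K[s, s] - k_su @ np.linalg.solve(Kuu, k_su)
            else:
                var = K[s, s]  # prior variance when U is still empty
            if var > best_var:
                best_s, best_var = s, var
        selected.append(best_s)
        remaining.remove(best_s)
    return selected

# Illustrative usage on random 2-D candidate locations.
X = np.random.default_rng(0).uniform(0.0, 10.0, size=(200, 2))
print(greedy_entropy_subset(X, m=10))
```

Because the score is a monotonic function of the posterior variance, maximizing entropy and maximizing variance select the same sampling unit at each step.
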
“…The first two equalities expand the first component using the definition of Λ (Theorem 2B), (9), (25), (26), and (27). The last two equalities exploit (9) and (11).…”
Section: Appendix B: Proof of Theorem 2B
confidence: 99%
“…In (Nguyen et al., 2016), a sampling strategy based on entropy maximization is designed to find the most informative locations for mobile robotic wireless sensor networks (MRWSs) over a spatial field modeled with a Gaussian process. In (Oh et al., 2010), they first use sparse Gaussian processes to reduce the computational cost on each sensor before the environment state is predicted. Then, to define an exploratory rule that moves towards the more informative locations, they use and compare three informative strategies: the informative vector machine (IVM), principal feature analysis (PFA), and a mutual-information-based measurement selection algorithm (MI).…”
Section: Related Work and Contribution
confidence: 99%
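
For the mutual-information (MI) criterion named in the statement above, a common Gaussian-case formulation (in the spirit of Krause, Singh, and Guestrin's MI criterion) scores a candidate by the ratio of its posterior variance given the selected set U to its posterior variance given all other unselected locations. The sketch below is a hedged illustration under that assumption; the kernel and helper names (posterior_var, mi_score) are hypothetical and not the exact algorithms compared in (Oh et al., 2010).

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def posterior_var(K, s, cond, noise_var=1e-2):
    # GP posterior variance of location index s given the index set `cond`.
    if not cond:
        return K[s, s]
    Kcc = K[np.ix_(cond, cond)] + noise_var * np.eye(len(cond))
    k_sc = K[s, cond]
    return K[s, s] - k_sc @ np.linalg.solve(Kcc, k_sc)

def mi_score(K, s, selected, noise_var=1e-2):
    # Gaussian-case MI gain for adding s: variance given the selected set U,
    # divided by variance given the complement V \ (U ∪ {s}).
    rest = [i for i in range(K.shape[0]) if i != s and i not in selected]
    return posterior_var(K, s, selected, noise_var) / posterior_var(K, s, rest, noise_var)

# Illustrative usage: score one candidate against a small selected set.
X = np.random.default_rng(1).uniform(0.0, 10.0, size=(50, 2))
K = rbf_kernel(X, X)
print(mi_score(K, s=0, selected=[3, 7, 19]))
```

Unlike the pure entropy score, this ratio penalizes candidates that are also well predicted by the rest of the field, which tends to avoid wasting selections on boundary locations.
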
“…Regression analysis for Gaussian processes requires growing computational complexity, since the size of the covariance matrix increases with the number of observations. This problem has been tackled in different directions in the context of mobile sensor networks [15], [16]. The computational complexity of a full Bayesian prediction algorithm for spatio-temporal Gaussian processes with unknown covariance functions grows at a prohibitively fast rate as the number of observations increases, owing to the MCMC method.…”
Section: Introduction
confidence: 99%
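
To make the complexity point concrete: exact GP prediction factorizes an n × n covariance matrix, costing O(n³) time and O(n²) memory, whereas a subset-of-data scheme factorizes only an m × m matrix over the m selected measurements. The snippet below is a minimal sketch assuming an RBF kernel and Gaussian observation noise; it is not the paper's algorithm, and the random subset stands in for a carefully selected one.

```python
import numpy as np

def gp_predict(X, y, Xstar, lengthscale=1.0, noise_var=1e-2):
    # Exact GP posterior mean at Xstar. The Cholesky factorization of the
    # n x n covariance matrix dominates the cost at O(n^3).
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale ** 2)
    Knn = k(X, X) + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(Knn)                        # O(n^3)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return k(Xstar, X) @ alpha

# Subset of data: running the same routine on m well-chosen measurements
# (m << n) cuts the factorization cost from O(n^3) to O(m^3).
rng = np.random.default_rng(2)
X = rng.uniform(0.0, 10.0, size=(500, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
subset = rng.choice(500, size=50, replace=False)       # stand-in for careful selection
Xstar = rng.uniform(0.0, 10.0, size=(5, 2))
print(gp_predict(X[subset], y[subset], Xstar))
```

This is precisely why the careful subset selection discussed in the earlier statements matters: the approximation quality of the m-point predictor depends on which measurements are kept, not just how many.
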