2019
DOI: 10.1177/0278364919875199
General-purpose incremental covariance update and efficient belief space planning via a factor-graph propagation action tree

Abstract: Fast covariance calculation is required both for SLAM (e.g. in order to solve data association) and for evaluating the information-theoretic term for different candidate actions in belief space planning (BSP). In this paper we make two primary contributions. First, we develop a novel general-purpose incremental covariance update technique, which efficiently recovers specific covariance entries after any change in the inference problem, such as introduction of new observations/variables or re-linearization of t…
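As background for the covariance recovery the abstract refers to, the following minimal NumPy sketch (not the paper's incremental algorithm; all names and dimensions are illustrative) shows how a single covariance column can be recovered from the factorization of the information matrix without forming the full inverse:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Hypothetical Gaussian inference problem: information matrix Lambda = J^T J.
rng = np.random.default_rng(0)
J = rng.standard_normal((10, 6))       # stacked measurement Jacobians
Lam = J.T @ J

# A specific covariance column Sigma[:, j] solves Lambda x = e_j,
# so it can be recovered from the factorization via two triangular solves
# instead of inverting Lambda.
factor = cho_factor(Lam)
j = 2
e_j = np.zeros(6)
e_j[j] = 1.0
col_j = cho_solve(factor, e_j)         # j-th column of Sigma

assert np.allclose(col_j, np.linalg.inv(Lam)[:, j])
```

In sparse SLAM systems the triangular solves exploit the sparsity of the factor, which is what makes recovering only the needed covariance entries much cheaper than a dense inversion.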


Cited by 10 publications (5 citation statements). References 26 publications.
“…Finally, it is also worth noting that autonomous mobile robot exploration is related to a more broadly focused body of works that study belief space planning (BSP) in unknown environments. Such works have succeeded in relaxing assumptions often imposed on exploration, such as that of maximum likelihood observations [9], and in achieving more efficient belief propagation and covariance recovery [29]. However, such techniques have thus far been implemented in simple, obstacle-free domains populated with point landmarks.…”
Section: Related Work
confidence: 99%
“…The major difficulty is that all future measurement sequences have to be rolled out in order to determine an optimal policy. A large body of literature [10]-[12] simplifies the problem with the maximum likelihood measurement assumption, where only the maximum likelihood measurement sequence is evaluated. Du Toit [13] shows that the assumption introduces artificial information, which makes the resulting control policy less robust if the actual future measurements differ from the maximum likelihood ones.…”
Section: Related Work
confidence: 99%
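The effect of the maximum likelihood measurement assumption discussed in the quoted passages can be sketched with a toy 1-D Kalman-filter rollout (models, numbers, and the helper `kf_step` are hypothetical, not taken from any of the cited works):

```python
import numpy as np

# Under the ML measurement assumption, the future measurement noise w_{t+1}
# is taken as 0, so a single deterministic rollout is evaluated; sampling w
# instead yields a spread of posterior means around the ML one.
rng = np.random.default_rng(1)
a, q, r = 1.0, 0.1, 0.5          # dynamics gain, process noise, meas. noise
mu, P = 0.0, 1.0                 # prior belief

def kf_step(mu, P, w):
    mu_pred, P_pred = a * mu, a * P * a + q
    z = a * mu + w               # simulated measurement with noise draw w
    K = P_pred / (P_pred + r)    # Kalman gain
    return mu_pred + K * (z - mu_pred), (1 - K) * P_pred

ml_mu, ml_P = kf_step(mu, P, w=0.0)   # ML assumption: w_{t+1} = 0
sampled_means = [kf_step(mu, P, rng.normal(0.0, np.sqrt(r)))[0]
                 for _ in range(1000)]
# The covariance update is measurement-independent, so ml_P is exact for any
# rollout, but the sampled posterior means scatter around the ML one.
```

This illustrates why the assumption "introduces artificial information": planning against only the zero-noise rollout ignores the dispersion of posterior means that actual measurements would produce.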
“…We also compare the proposed method with methods that assume maximum likelihood measurements. The assumption is equivalent to assuming w_{t+1} = 0 in (12). We refer to this method as M-iLQG in the following.…”
Section: B. Real World Environments
confidence: 99%
“…BSP can be seen as a joint control and estimation problem in which an agent (robot) has to find an optimal control according to a specific task-related objective, which itself has to be estimated while accounting for different sources of uncertainty, for example, due to stochastic sensing, motion, or environment. Some interesting instantiations of this problem are active SLAM (e.g., Atanasov et al, 2015; Du et al, 2011; Huang et al, 2005; Indelman et al, 2015; Kim and Eustice, 2014; Kopitkov and Indelman, 2017, 2019; Regev and Indelman, 2017; Stachniss et al, 2004; Valencia et al, 2012), active perception (Bajcsy 1988), sensor deployment and measurement selection (Joshi and Boyd 2009; Bian et al 2006; Hovland and McCarragher 1997), and graph reduction and pruning (Kretzschmar and Stachniss 2012; Carlevaris-Bianco et al 2014).…”
Section: Introduction
confidence: 99%
“…In Indelman et al (2015) this challenge was addressed by resorting to the information form and utilizing sparsity; however, calculations still involve expensive access to marginal probability distributions. The rAMDL approach (Kopitkov and Indelman 2017, 2019) performs a one-time calculation of the marginal covariance of variables involved in the candidate actions, and then applies an augmented matrix determinant lemma (AMDL) to efficiently evaluate the information-theoretic cost for each candidate action. Nevertheless, that approach still requires recovery of the appropriate marginal covariances, the complexity of which depends on state dimensionality and system sparsity.…”
Section: Introduction
confidence: 99%
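The AMDL evaluation summarized above rests on the standard matrix determinant lemma. A minimal sketch of the identity (illustrative dimensions; this is not the rAMDL implementation, and in practice only the marginal covariance blocks touched by the candidate action are needed, not the full inverse):

```python
import numpy as np

# For a candidate action contributing new measurement Jacobian rows A, the
# information gain  0.5 * (logdet(Lam + A^T A) - logdet(Lam))  equals
# 0.5 * logdet(I_m + A @ Sigma @ A.T)  by the matrix determinant lemma,
# where Sigma = Lam^{-1}. The right-hand side only involves an m x m matrix.
rng = np.random.default_rng(2)
n, m = 8, 3
J = rng.standard_normal((2 * n, n))
Lam = J.T @ J                          # prior information matrix
A = rng.standard_normal((m, n))        # candidate-action Jacobian rows
Sigma = np.linalg.inv(Lam)             # only marginal blocks needed in practice

direct = np.linalg.slogdet(Lam + A.T @ A)[1] - np.linalg.slogdet(Lam)[1]
lemma = np.linalg.slogdet(np.eye(m) + A @ Sigma @ A.T)[1]
assert np.allclose(direct, lemma)
```

Since m (the number of new measurement rows) is typically far smaller than the state dimension n, evaluating the lemma form per candidate action is much cheaper than recomputing the full determinant, which is the efficiency argument the quoted passage makes.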