2017 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2017.7989080

T-LQG: Closed-loop belief space planning via trajectory-optimized LQG

Cited by 19 publications (21 citation statements)
References 14 publications
“…This problem utilizes the trace of the covariance as the optimization objective and is accompanied by a separate feedback design implemented in the execution of the policy. In a companion paper, we prove the near-optimality of this framework under a small-noise assumption [27], [28].…”
Section: Comparison of Trajectory Planning Approaches
confidence: 97%
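
The statement above captures the core of the T-LQG formulation: the planning cost is the accumulated trace of the estimation covariance, with the feedback law designed separately and applied only at execution time. A minimal sketch of that objective, assuming generic differentiable dynamics and observation models with Jacobian callables f_jac and h_jac (illustrative names, not from the paper):

```python
import numpy as np

def covariance_cost(x_nom, u_nom, f_jac, h_jac, Q, R, P0):
    """Trace-of-covariance planning objective along a nominal trajectory.

    Propagates the EKF covariance (prediction + update) along the given
    nominal trajectory and returns sum_t tr(P_t). f_jac(x, u) and h_jac(x)
    are Jacobians of hypothetical dynamics/observation models; Q and R are
    process and measurement noise covariances.
    """
    P = P0
    cost = np.trace(P)
    for x, u in zip(x_nom, u_nom):
        A = f_jac(x, u)                       # dynamics Jacobian at the nominal point
        H = h_jac(x)                          # observation Jacobian at the nominal point
        P = A @ P @ A.T + Q                   # EKF prediction step
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        P = (np.eye(P.shape[0]) - K @ H) @ P  # EKF measurement update
        cost += np.trace(P)                   # accumulate the trace objective
    return cost
```

Minimizing this cost over the nominal controls yields the linearization trajectory; the LQG feedback gains are then computed around that trajectory for execution.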
“…Problem 3: T-LQG Planning Problem [27] Solve for the optimal linearization trajectory of the LQG policy:…”
Section: Comparison of Trajectory Planning Approaches
confidence: 99%
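
Problem 3 is quoted with its formula elided. A schematic form consistent with the trace-of-covariance objective described above (an assumption on my part, not the paper's exact statement) is:

```latex
% Schematic only: the exact constraints and weights in [27] may differ.
\min_{u_{0:T-1}} \; \sum_{t=1}^{T} \operatorname{tr}(P_t)
\quad \text{s.t.} \quad
x_{t+1} = f(x_t, u_t), \qquad
P_t = \mathrm{Riccati}\bigl(P_{t-1};\, x_t, u_t\bigr)
```

where $\mathrm{Riccati}(\cdot)$ denotes the EKF covariance recursion sketched in the code above, evaluated along the nominal trajectory.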
“…However, ignoring the effects of stochastic future observations can degrade the performance (van den Berg et al, 2012). Other methods (Rafieisakhaei et al, 2017; van den Berg et al, 2012) that do not rely on the MLO assumption are advantageous in that regard. In particular, belief iterative linear quadratic Gaussian (iLQG) (van den Berg et al, 2012) performs iterative local optimization in a Gaussian belief space by quadratically approximating the value function and linearizing the dynamics to obtain a time-varying affine feedback policy.…”
Section: Introduction
confidence: 99%
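
The time-varying affine feedback policy mentioned here has the form u_t = ū_t + L_t(b_t − b̄_t) around the nominal belief trajectory. A minimal execution sketch, assuming a hypothetical belief_step filter update and precomputed gains (all names illustrative):

```python
import numpy as np

def rollout_affine_policy(b0, b_nom, u_nom, gains, belief_step):
    """Execute a time-varying affine feedback policy in belief space.

    Applies u_t = u_nom[t] + L_t (b_t - b_nom[t]), the policy form produced
    by belief-space iLQG; belief_step(b, u) is a stand-in for the filter
    (e.g., EKF) belief dynamics.
    """
    b = b0
    beliefs, controls = [b0], []
    for b_bar, u_bar, L in zip(b_nom, u_nom, gains):
        u = u_bar + L @ (b - b_bar)  # affine correction around the nominal
        b = belief_step(b, u)        # propagate the belief with the filter
        beliefs.append(b)
        controls.append(u)
    return beliefs, controls
```

The gains L_t come from the backward pass of the chosen method (belief iLQG, or standard LQG around the T-LQG trajectory); only the forward execution is shown here.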
“…In particular POMDPs are notorious for their computational complexity that may prohibit their application for navigation in complex or uncertain environments in high dimensional state spaces. In [2] a more scalable LQG variant is proposed and applied to environments with discontinuous sensing regions. An approximate solution to POMDPs is given in [3] but with the use of considerable pre-processing.…”
Section: Introduction
confidence: 99%