Deep Active Localization (2019)
DOI: 10.1109/lra.2019.2932575

Cited by 31 publications (15 citation statements)
References 24 publications
“…However, ANL [65] is based on the assumption that its transition functions are deterministic, which does not transfer well to real robots. Therefore, Gottipati et al. [66] proposed a hierarchical likelihood estimation approach that decouples the resolution of the likelihood estimate from the distance the robot travels, and used advantage actor-critic (A2C) to accomplish the localization task on a real robot.…”
Section: A. Direct DRL Navigation
confidence: 99%
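The A2C algorithm referenced in the statement above trains a critic to estimate state values and uses the resulting advantage to weight policy-gradient updates. A minimal sketch of the advantage computation for one rollout is shown below; the function name, rollout shapes, and bootstrapping convention are illustrative assumptions, not details from the cited paper.

```python
import numpy as np

def a2c_advantages(rewards, values, gamma=0.99):
    """Compute discounted returns and advantages for one rollout.

    rewards: per-step rewards r_0 .. r_{T-1}
    values:  critic estimates V(s_0) .. V(s_{T-1}), plus a bootstrap
             value V(s_T) for the state after the rollout ends
    """
    T = len(rewards)
    returns = np.zeros(T)
    running = values[-1]  # bootstrap from the value of the final state
    for t in reversed(range(T)):
        running = rewards[t] + gamma * running
        returns[t] = running
    # Advantage A(s_t, a_t) = R_t - V(s_t) weights the policy gradient.
    advantages = returns - np.asarray(values[:T])
    return returns, advantages
```

In a full A2C implementation the advantages multiply the log-probability of the taken actions in the actor loss, while the critic is regressed toward the returns.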
“…
Reference                 Task                 Algorithm   Reward
Tai and Liu [19]          Navigation           DQN         Extrinsic
Mirowski et al. [20]      Navigation           A3C         Extrinsic
Wen et al. [22]           Navigation           D3QN        Extrinsic
Zhelo et al. [26]         Goal navigation      A3C         Intrinsic
Oh and Cavallaro [27]     Goal navigation      A3C         Intrinsic
Tai et al. [35]           Goal navigation      ADDPG       Intrinsic
Zhu et al. [33]           Frontier selection   A3C         Intrinsic
Niroui et al. [34]        Frontier selection   A3C         Intrinsic
Gottipati et al. [31]     Active localization  A2C         Posterior belief
Chaplot et al. [30]       Active localization  A3C         Posterior belief
Chen et al. [29]          Exploration          DQN         Entropy
Chen et al. [32]          Active…”
Section: Partial Observability
confidence: 99%
“…In [28,29], actions are selected to maximize the entropy reduction (information gain or Kullback-Leibler divergence) of a 2-dimensional occupancy grid map, either by learning in a supervised fashion with labelled cells or by introducing such a metric into the DRL reward function, respectively. Another approach to the active localization problem that also acts directly on the reward design was proposed in [30,31], where entropy reduction is achieved by including in the reward the maximum likelihood (ML) of being in any state (belief accuracy). Recently, Chen et al. [32] trained both DQN and A2C agents on the underlying SLAM pose graph using Graph Neural Networks (GNNs).…”
Section: Introduction
confidence: 99%
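The entropy-reduction reward described in the statement above can be sketched as follows: the agent's pose belief is a normalized grid, and an action is rewarded by how much it lowers the belief's Shannon entropy. The function names and the use of natural-log entropy are illustrative assumptions, not details from the cited works.

```python
import numpy as np

def belief_entropy(belief, eps=1e-12):
    """Shannon entropy (in nats) of a normalized pose-belief grid."""
    b = np.asarray(belief, dtype=float)
    b = b / b.sum()  # ensure the belief is a proper distribution
    return float(-(b * np.log(b + eps)).sum())

def entropy_reduction_reward(belief_before, belief_after):
    """Reward an action by how much it sharpens the pose belief."""
    return belief_entropy(belief_before) - belief_entropy(belief_after)
```

A uniform belief over N cells has entropy log(N); collapsing it to a single cell yields the maximum possible reward of log(N), so actions that disambiguate the pose are preferred.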
“…Such approaches can be computationally intensive in general environments. In a novel and promising approach, the navigation policy for active localization is learned with reinforcement learning [Gottipati et al. 2019]. Note that active localization is related to the more general problem of simultaneous localization and mapping (SLAM) [Rekleitis et al. 2006].…”
Section: Related Work
confidence: 99%