2017
DOI: 10.48550/arxiv.1705.10422
Preprint

Learning End-to-end Multimodal Sensor Policies for Autonomous Navigation

Abstract: Multisensory policies are known to enhance both state estimation and target tracking. However, in the space of end-to-end sensorimotor control, this multi-sensor outlook has received limited attention. Moreover, systematic ways to make policies robust to partial sensor failure are not well explored. In this work, we propose a specific customization of Dropout, called Sensor Dropout, to improve multisensory policy robustness and handle partial failure in the sensor set. We also introduce an additional auxiliary loss…
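
To make the abstract's core idea concrete, below is a minimal PyTorch sketch of what a Sensor Dropout-style fusion layer could look like: entire per-sensor feature blocks are zeroed at random during training (never all at once) so the downstream policy learns to act under partial sensor failure. The class name `SensorDropout`, the `drop_prob` parameter, and the rescaling scheme are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class SensorDropout(nn.Module):
    """Hypothetical sketch: randomly zero whole per-sensor feature blocks
    during training so the policy tolerates missing modalities."""

    def __init__(self, drop_prob: float = 0.5):
        super().__init__()
        self.drop_prob = drop_prob

    def forward(self, sensor_feats):
        # sensor_feats: one feature tensor per modality, each (batch, dim_i)
        if not self.training:
            return torch.cat(sensor_feats, dim=-1)
        keep = torch.rand(len(sensor_feats)) > self.drop_prob
        if not keep.any():                        # never drop every sensor
            keep[torch.randint(len(sensor_feats), (1,))] = True
        scale = len(sensor_feats) / keep.sum()    # rescale surviving blocks
        fused = [f * scale if k else torch.zeros_like(f)
                 for f, k in zip(sensor_feats, keep)]
        return torch.cat(fused, dim=-1)

# Usage: fuse camera and lidar features before the policy head.
camera, lidar = torch.randn(8, 64), torch.randn(8, 32)
layer = SensorDropout(drop_prob=0.5)
layer.train()
fused = layer([camera, lidar])    # (8, 96); some blocks randomly zeroed
```

At inference time the layer simply concatenates all available features, which is why, unlike the works cited below, a policy trained this way need not be told which sensor has failed.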

Cited by 12 publications (14 citation statements)
References 12 publications
“…There have been several recent works that can perform inference or complete downstream tasks in the presence of missing modalities [23, 24, 33–35, 37]. However, they need to know which modality is missing at inference time.…”
Section: Introduction (mentioning)
confidence: 99%
“…[21] adopts RGB and depth images to estimate the surface normal of the object. [6], [14], [22]–[25] use RGB images with depth or point clouds to predict grasping policies. [26]–[29] fuse RGB and haptic data to train a grasping network.…”
Section: B. Multimodal Perception (mentioning)
confidence: 99%
“…Though state-dependent intermediate costs were not commonly seen in supervised problems until recently [75], they have been used extensively in the context of deep reinforcement learning to guide or stabilize training, e.g. via auxiliary tasks and losses [76], [77].…”
Section: Training DNN with Optimal Control (mentioning)
confidence: 99%