2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)
DOI: 10.1109/mfi.2017.8170407
A deep neural network approach to fusing vision and heteroscedastic motion estimates for low-SWaP robotic applications

Cited by 3 publications (5 citation statements); references 17 publications.
“…Increased mean predictive performance of pixel-position RMSE compared to our previous approach [4] with identical runtime after pruning (158 Hz); and…”
Section: Introduction
confidence: 92%
“…With the original DE's fast runtime, we saw the possibility of generating many different hypothetical outputs for each input image and then selecting the most accurate at execution time. By learning to produce n image-reconstruction predictions, we have extended DE into the Multi-Hypothesis DeepEfference (MHDE) [4] architecture to better handle real-world noise sources.…”
Section: Introduction
confidence: 99%
“…The final level of VIOLearner employs multi-hypothesis pathways similar to [20], [21], where several possible hypotheses for the reconstructions of a target image (and the associated transformations θ_m, m ∈ M, which generated those reconstructions) are computed in parallel. The lowest-error hypothesis reconstruction is chosen during each network run, and the corresponding affine matrix θ_{m*} which generated the winning reconstruction is output as the final network estimate of camera pose change between images I_j and I_{j+1}.…”
Section: Level N and Multi-Hypothesis Pathways
confidence: 99%
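The selection step quoted above — evaluating several hypothesis reconstructions in parallel and keeping the transform that produced the lowest-error one — can be sketched as follows. This is a minimal NumPy sketch, not the VIOLearner implementation; the function and argument names are hypothetical.

```python
import numpy as np

def select_hypothesis(reconstructions, thetas, target):
    """Pick the lowest-error hypothesis reconstruction and its transform.

    reconstructions: (M, H, W) array of candidate reconstructions of `target`
    thetas:          (M, 2, 3) array of affine matrices that produced them
    target:          (H, W) target image
    Returns the winning affine matrix, its reconstruction, and its index m*.
    """
    # Euclidean (Frobenius) reconstruction error of each hypothesis
    errors = np.linalg.norm(reconstructions - target[None], axis=(1, 2))
    m_star = int(np.argmin(errors))  # index of the winning hypothesis
    return thetas[m_star], reconstructions[m_star], m_star
```

At execution time only the winning θ_{m*} is emitted as the pose estimate; the losing hypotheses are discarded for that network run.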
“…Error for this last multi-hypothesis level is computed according to a winner-take-all (WTA) Euclidean loss rule (see [20] for more detail and justifications):…”
Section: Level N and Multi-Hypothesis Pathways
confidence: 99%
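A winner-take-all Euclidean loss of the kind described takes the minimum, over the M hypotheses, of the Euclidean reconstruction error against the target, so in a differentiable framework gradients flow only through the winning pathway. A minimal NumPy sketch (hypothetical names; not the authors' implementation):

```python
import numpy as np

def wta_euclidean_loss(reconstructions, target):
    """Winner-take-all Euclidean loss over M hypothesis reconstructions.

    reconstructions: (M, H, W) candidate reconstructions of `target`
    target:          (H, W) target image
    Only the lowest-error hypothesis contributes to the loss value.
    """
    # (M,) Euclidean error of each hypothesis against the target
    errors = np.sqrt(((reconstructions - target[None]) ** 2).sum(axis=(1, 2)))
    return float(errors.min())
```

Because only the minimum term survives, each training step updates only the pathway whose reconstruction won, which is what lets the hypotheses specialize to different noise regimes.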