2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2018.8593573

Vision-Aided Absolute Trajectory Estimation Using an Unsupervised Deep Network with Online Error Correction

Abstract: We present an unsupervised deep neural network approach to the fusion of RGB-D imagery with inertial measurements for absolute trajectory estimation. Our network, dubbed the Visual-Inertial-Odometry Learner (VIOLearner), learns to perform visual-inertial odometry (VIO) without inertial measurement unit (IMU) intrinsic parameters (corresponding to gyroscope and accelerometer bias or white noise) or the extrinsic calibration between an IMU and camera. The network learns to integrate IMU measurements and generate…

Cited by 27 publications (26 citation statements)
References 20 publications
“…However, assessing the usefulness of a method for localization requires evaluating its accuracy in predicting location. A common metric for that is average relative translational drift t_rel [33,49]: the distance between the predicted location and the ground-truth location, divided by the distance traveled and averaged over the trajectory. Table 6 summarizes both metrics, demonstrating the improvements our method achieves on both.…”
Section: Odometry
confidence: 99%
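The drift metric quoted above can be sketched directly from its stated definition. This is a simplified reading: the standard KITTI t_rel evaluation averages errors over fixed-length subsequences, which this sketch does not replicate.

```python
import numpy as np

def t_rel(pred_xyz, gt_xyz):
    """Average relative translational drift: per-frame distance between
    predicted and ground-truth positions, divided by the cumulative
    ground-truth distance traveled, averaged over the trajectory."""
    pred_xyz = np.asarray(pred_xyz, dtype=float)
    gt_xyz = np.asarray(gt_xyz, dtype=float)
    # Cumulative ground-truth path length at each frame (first frame = 0).
    step = np.linalg.norm(np.diff(gt_xyz, axis=0), axis=1)
    traveled = np.concatenate([[0.0], np.cumsum(step)])
    err = np.linalg.norm(pred_xyz - gt_xyz, axis=1)
    # Skip the first frame, where no distance has been traveled yet.
    return float(np.mean(err[1:] / traveled[1:]))
```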
“…Metric: ATE / t_rel (seq. 09), ATE / t_rel (seq. 10)
Zhou [50]: 0.021 / 17.84%, 0.020 / 37.91%
GeoNet [48]: 0.012 / -, 0.012 / -
Zhan [49]: - / 11.92%, - / 12.45%
Mahjourian [25]: 0.013 / -, 0.012 / -
Struct2depth [7]: 0.…
Table 6: Absolute Trajectory Error (ATE) [50] and average relative translational drift (t_rel) [33] on the 09 and 10 KITTI odometry sequences. Our method with both learned and given intrinsics is compared to prior work.…”
Section: Odometry
confidence: 99%
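The ATE values in the quoted Table 6 are, roughly, the RMSE between aligned predicted and ground-truth positions. A minimal sketch, assuming translation-only alignment (published evaluations typically use a full Umeyama similarity alignment):

```python
import numpy as np

def ate_rmse(pred_xyz, gt_xyz):
    """Absolute Trajectory Error: RMSE of positions after aligning the
    predicted trajectory to ground truth. Here alignment is
    translation-only (subtract mean offset), for illustration."""
    pred = np.asarray(pred_xyz, dtype=float)
    gt = np.asarray(gt_xyz, dtype=float)
    # Remove the mean offset between the two trajectories.
    aligned = pred - pred.mean(axis=0) + gt.mean(axis=0)
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```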
“…To the best of our knowledge, [12] is the first unsupervised VIO system. The network learns to integrate IMU measurements and generate trajectories which are then corrected online according to the Jacobians of scaled image projection errors with respect to a spatial grid of pixel coordinates.…”
Section: Unsupervised Learning Methods
confidence: 99%
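The online correction described above relies on analytic Jacobians of the image reprojection error computed inside the network. As a loose illustration of the general idea only (gradient-based refinement of a pose hypothesis against a scalar error), a numerical-gradient sketch with a hypothetical `error_fn`:

```python
import numpy as np

def online_correction(error_fn, theta, lr=0.1, steps=5, eps=1e-5):
    """Refine a pose hypothesis `theta` by stepping down the numerical
    gradient of a scalar projection-error function. Illustrative only:
    VIOLearner uses analytic Jacobians over a spatial grid of pixel
    coordinates rather than finite differences."""
    theta = np.asarray(theta, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            d = np.zeros_like(theta)
            d[i] = eps
            # Central finite difference for each pose parameter.
            grad[i] = (error_fn(theta + d) - error_fn(theta - d)) / (2 * eps)
        theta = theta - lr * grad
    return theta
```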
“…In our experiment, sequences 00-08 are used for training and 09-10 are used for testing. Note that sequence 03 is excluded since its IMU data is not available in the KITTI Raw Data. In addition, 5% of KITTI sequences 00-08 are selected as a validation set, which is the same split as in [12].…”
Section: B. Network Training
confidence: 99%
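The split described in this statement can be sketched as follows. The per-sequence frame counts here are hypothetical placeholders, not the actual KITTI sequence lengths:

```python
import random

def split_kitti(train_seqs=("00", "01", "02", "04", "05", "06", "07", "08"),
                frames_per_seq=1000, val_frac=0.05, seed=0):
    """Sketch of the split quoted above: KITTI odometry sequences 00-08
    (minus 03, which lacks raw IMU data) for training, with a random 5%
    of their frame indices held out for validation."""
    frames = [(s, i) for s in train_seqs for i in range(frames_per_seq)]
    rng = random.Random(seed)
    rng.shuffle(frames)
    n_val = int(len(frames) * val_frac)
    return frames[n_val:], frames[:n_val]
```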
“…Rambach [117] proposed a deep learning approach to visual-inertial camera pose estimation through a trained short-term memory model. Shamwell [118] presented an unsupervised deep neural network approach to the fusion of RGB-D imagery with inertial measurements for absolute trajectory estimation.…”
Section: SLAM With Deep Learning
confidence: 99%