2020 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra40945.2020.9197374

Visual Odometry Revisited: What Should Be Learnt?

Cited by 135 publications (90 citation statements). References 37 publications.
“…Li et al. [ 17 ] proposed DeepSLAM, which uses a deep recurrent convolutional neural network (RCNN) to simultaneously generate a pose estimate, a depth map, and an outlier rejection mask. Zhang et al. [ 31 ] presented a monocular VO system that combines geometry-based methods with unsupervised deep learning. Liu et al. [ 32 ] presented a deep-learning-based RGB-D visual odometry system that takes an RGB image and a depth image as input and outputs the camera pose through a dual-stream recurrent convolutional neural network.…”
Section: Related Work
confidence: 99%
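
The dual-stream design described for Liu et al. [ 32 ] can be sketched roughly as follows. This is a minimal PyTorch illustration, assuming a simple per-modality convolutional encoder, an LSTM core, and a 6-DoF pose head; the layer sizes and fusion scheme are assumptions, not the cited paper's architecture.

```python
# Hypothetical dual-stream recurrent convolutional VO network (illustrative only).
import torch
import torch.nn as nn

class DualStreamRCNN(nn.Module):
    def __init__(self, hidden_size=256):
        super().__init__()
        # One convolutional encoder per modality (RGB and depth).
        self.rgb_encoder = self._make_encoder(in_channels=3)
        self.depth_encoder = self._make_encoder(in_channels=1)
        # Recurrent core models temporal dependencies across frames.
        self.rnn = nn.LSTM(input_size=2 * 128, hidden_size=hidden_size,
                           batch_first=True)
        # Regress 6-DoF relative pose: 3 translation + 3 rotation (axis-angle).
        self.pose_head = nn.Linear(hidden_size, 6)

    @staticmethod
    def _make_encoder(in_channels):
        return nn.Sequential(
            nn.Conv2d(in_channels, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, rgb_seq, depth_seq):
        # rgb_seq: (B, T, 3, H, W); depth_seq: (B, T, 1, H, W)
        b, t = rgb_seq.shape[:2]
        rgb_feat = self.rgb_encoder(rgb_seq.flatten(0, 1)).view(b, t, -1)
        d_feat = self.depth_encoder(depth_seq.flatten(0, 1)).view(b, t, -1)
        fused, _ = self.rnn(torch.cat([rgb_feat, d_feat], dim=-1))
        return self.pose_head(fused)  # (B, T, 6) relative poses

poses = DualStreamRCNN()(torch.rand(2, 5, 3, 64, 64), torch.rand(2, 5, 1, 64, 64))
```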
“…Because these methods can predict both depth and camera pose, they are widely used in robotics and self-driving cars as visual odometry (VO) systems. Zhan et al. investigated end-to-end unsupervised depth-VO [39] and also integrated the predicted depth with the Perspective-n-Point (PnP) method to achieve high robustness [40].…”
Section: Related Work
confidence: 99%
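
The depth-plus-PnP integration attributed to Zhan et al. [40] amounts to lifting matched pixels to 3D with the predicted depth and solving a 3D-2D pose problem. A minimal OpenCV sketch of that idea, with hypothetical variable names and a RANSAC-based solver (the correspondence source, e.g. optical flow, is assumed):

```python
# Fuse learned depth with geometric pose estimation via PnP (illustrative sketch).
import cv2
import numpy as np

def pose_from_depth_pnp(pts1, pts2, depth1, K):
    """Relative pose from 2D-2D matches, lifting frame-1 points to 3D
    using a (learned) depth map, then solving PnP with RANSAC.

    pts1, pts2: (N, 2) pixel matches in frame 1 / frame 2
    depth1:     (H, W) predicted depth for frame 1
    K:          (3, 3) camera intrinsics
    """
    z = depth1[pts1[:, 1].astype(int), pts1[:, 0].astype(int)]
    valid = z > 0
    # Back-project frame-1 pixels to 3D camera coordinates.
    x = (pts1[valid, 0] - K[0, 2]) * z[valid] / K[0, 0]
    y = (pts1[valid, 1] - K[1, 2]) * z[valid] / K[1, 1]
    pts3d = np.stack([x, y, z[valid]], axis=1).astype(np.float64)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d, pts2[valid].astype(np.float64), K, None,
        reprojectionError=2.0, iterationsCount=100)
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix from axis-angle vector
    return ok, R, tvec, inliers
```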
“…For different scenes with different sensors, these methods are difficult to transfer, because the sensors must be photometrically recalibrated and a correct uncertainty map for the matched points must be formed. Modern enhancements of these approaches are neural-network methods trained in a self-supervised manner: D3VO [35], DeepMatchVO [36], DF-VO [37]. All of them generate a pose estimate between two neighboring frames of a monocular camera, together with a depth map.…”
Section: Visual-based Robot Localization
confidence: 99%
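
The self-supervised training these methods share is a view-synthesis objective: warp one frame into the other using the predicted depth and relative pose, then penalize the photometric difference. A rough PyTorch sketch under assumed tensor shapes, with no occlusion masking or uncertainty weighting (which methods like D3VO add on top):

```python
# Photometric view-synthesis loss for self-supervised depth/pose (sketch).
import torch
import torch.nn.functional as F

def photometric_loss(target, source, depth, T, K):
    """target, source: (B, 3, H, W); depth: (B, 1, H, W);
    T: (B, 4, 4) relative pose target->source; K: (B, 3, 3) intrinsics."""
    b, _, h, w = target.shape
    # Pixel grid in homogeneous coordinates.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys, torch.ones_like(xs)], 0).float()  # (3, H, W)
    grid = grid.view(1, 3, -1).expand(b, -1, -1)
    # Back-project to 3D, transform into the source frame, re-project.
    cam = torch.linalg.inv(K) @ grid * depth.view(b, 1, -1)
    cam_h = torch.cat([cam, torch.ones(b, 1, h * w)], dim=1)
    src = K @ (T @ cam_h)[:, :3]
    px = src[:, :2] / src[:, 2:].clamp(min=1e-6)
    # Normalize to [-1, 1] for grid_sample and warp the source image.
    px = px.view(b, 2, h, w).permute(0, 2, 3, 1)
    px[..., 0] = 2 * px[..., 0] / (w - 1) - 1
    px[..., 1] = 2 * px[..., 1] / (h - 1) - 1
    warped = F.grid_sample(source, px, align_corners=True)
    return (target - warped).abs().mean()
```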
“…For our study, the following SLAM metrics were taken: 1) relative translation ($T_{KITTI}$, %) and rotation ($R_{KITTI}$, deg/m) errors, as introduced in the KITTI Odometry Benchmark [52], [53]. Because of the short indoor tracks in the HISNav Dataset, we use distance subsequences of length (0.25, 0.5, 1, 2, 4, 8, 16, 20) meters instead of the conventional (100, 200, ..., 800) meters.…”
Section: B. Indoor Robot Localization Using Visual SLAM
confidence: 99%
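
A plausible implementation of these relative errors, following the KITTI devkit logic [52], [53] but with the shorter subsequence lengths quoted above, is sketched below; the input format (lists of 4x4 world-from-camera pose matrices) is an assumption.

```python
# KITTI-style relative translation/rotation errors over distance subsequences.
import numpy as np

LENGTHS = (0.25, 0.5, 1, 2, 4, 8, 16, 20)  # meters (indoor variant)

def trajectory_distances(poses):
    """Cumulative path length at each pose."""
    steps = [np.linalg.norm(b[:3, 3] - a[:3, 3]) for a, b in zip(poses, poses[1:])]
    return np.concatenate([[0.0], np.cumsum(steps)])

def relative_errors(gt, est, lengths=LENGTHS):
    dist = trajectory_distances(gt)
    t_errs, r_errs = [], []
    for i in range(len(gt)):
        for L in lengths:
            # First index at least L meters further along the path.
            j = np.searchsorted(dist, dist[i] + L)
            if j >= len(gt):
                continue
            # Residual motion between ground-truth and estimated subsequences.
            dgt = np.linalg.inv(gt[i]) @ gt[j]
            dest = np.linalg.inv(est[i]) @ est[j]
            err = np.linalg.inv(dgt) @ dest
            t_errs.append(np.linalg.norm(err[:3, 3]) / L)  # translation ratio
            angle = np.arccos(np.clip((np.trace(err[:3, :3]) - 1) / 2, -1, 1))
            r_errs.append(angle / L)                       # rotation, rad/m
    return 100 * np.mean(t_errs), np.degrees(np.mean(r_errs))  # %, deg/m
```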