PL-VIO: Tightly-Coupled Monocular Visual–Inertial Odometry Using Point and Line Features
2018 · DOI: 10.3390/s18041159
Abstract: To address the problem of estimating camera trajectory and to build a structural three-dimensional (3D) map based on inertial measurements and visual observations, this paper proposes point–line visual–inertial odometry (PL-VIO), a tightly-coupled monocular visual–inertial odometry system exploiting both point and line features. Compared with point features, lines provide significantly more geometrical structure information on the environment. To obtain both computation simplicity and representational compactness, …
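The abstract breaks off at the line representation. For context, the PL-VIO paper represents 3D lines with Plücker coordinates and optimizes a minimal orthonormal representation; a hedged sketch of that standard form (following Bartoli and Sturm) is below. The symbols X1, X2, U, W are conventional notation, not quoted from the abstract.

```latex
% A 3D line through two homogeneous points X1, X2 in Plücker coordinates:
% d is the line direction, n the moment vector (6 numbers, 4 degrees of freedom).
\[
  \mathcal{L} \;=\; \begin{bmatrix} \mathbf{n} \\ \mathbf{d} \end{bmatrix} \in \mathbb{R}^{6},
  \qquad \mathbf{n} = \mathbf{X}_1 \times \mathbf{X}_2, \quad
  \mathbf{d} = \mathbf{X}_2 - \mathbf{X}_1 .
\]
% Because Plücker coordinates are over-parameterized, optimization uses the
% minimal orthonormal representation (U, W) in SO(3) x SO(2), obtained from
% a QR-type decomposition of [n | d]:
\[
  [\,\mathbf{n} \mid \mathbf{d}\,] \;=\; \mathbf{U}
  \begin{bmatrix} w_1 & 0 \\ 0 & w_2 \\ 0 & 0 \end{bmatrix},
  \qquad
  \mathbf{W} = \begin{bmatrix} w_1 & -w_2 \\ w_2 & w_1 \end{bmatrix} \in SO(2).
\]
```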

Cited by 243 publications (148 citation statements) · References 44 publications
“…The (right) invariant Kalman filter [56] was recently employed to improve filter consistency [25,57,58,59,60], as well as the (iterated) EKF that was also used for VINS in robocentric formulations [22,61,62,63]. On the other hand, in the EKF framework, different geometric features besides points have also been exploited to improve VINS performance, for example, line features used in [64,65,66,67,68] and plane features in [69,70,71,72]. In addition, the MSCKF-based VINS was also extended to use rolling-shutter cameras with inaccurate time synchronization [64,73], RGBD cameras [69,74], multiple cameras [53,75,76] and multiple IMUs [77].…”
Section: Filtering-Based vs. Optimization-Based Estimation
Confidence: 99%
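As a minimal sketch of the propagate/update cycle that the EKF-style VINS estimators surveyed in this excerpt share: the IMU drives the prediction and visual features drive the correction. All function names and the toy models here are illustrative assumptions, not any cited system's API.

```python
import numpy as np

def ekf_propagate(x, P, f, F, Q):
    """IMU-driven prediction: x <- f(x), P <- F P F^T + Q."""
    return f(x), F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """Visual-measurement correction with innovation z - h(x)."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ (z - h(x))          # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy usage: 2-state random walk observed directly.
x, P = np.zeros(2), np.eye(2)
F = np.eye(2); H = np.eye(2); Q = 0.01 * np.eye(2); R = 0.01 * np.eye(2)
x, P = ekf_propagate(x, P, lambda s: s, F, Q)
x, P = ekf_update(x, P, np.array([1.0, 2.0]), lambda s: s, H, R)
```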
“…As mentioned earlier, vision-aided INS (VINS) arguably is among the most popular localization methods in particular for resource-constrained sensor platforms such as mobile devices and micro aerial vehicles (MAVs) navigating in GPS-denied environments (e.g., see [26,27,10,28]). While most current VINS algorithms focus on using point features (e.g., [7,8,9,10]), line and plane features may not be blindly discarded in structured environments [29,30,31,32,33,34,35,36,24], in part because: (i) they are ubiquitous and compact in many urban or indoor environments (e.g., doors, walls, and stairs), (ii) they can be detected and tracked over a relatively long time period, and (iii) they are more robust in texture-less environments compared to point features.…”
Section: Aided INS with Points, Lines, and Planes
Confidence: 99%
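A hedged sketch of the detection step this excerpt alludes to, using OpenCV's LSD line segment detector. Note that "frame.png" is a placeholder path, and createLineSegmentDetector is unavailable in some OpenCV builds for licensing reasons; this is an illustration of line feature extraction, not PL-VIO's own front end.

```python
import cv2

# Load a grayscale frame (placeholder path; detect() fails on a missing file).
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# Detect line segments; 'lines' has shape N x 1 x 4, one (x1, y1, x2, y2) per segment.
lsd = cv2.createLineSegmentDetector()
lines, widths, precisions, nfas = lsd.detect(img)

# Draw the detections for inspection and save the visualization.
vis = lsd.drawSegments(cv2.cvtColor(img, cv2.COLOR_GRAY2BGR), lines)
cv2.imwrite("frame_lines.png", vis)
```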
“…Stacking (29), (32), and (33) yields the complete measurement Jacobian of the plane measurement w.r.t. the state (1):…”
Section: Closest Point (CP) Parameterization
Confidence: 99%
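The equation numbers here refer to the citing paper and cannot be reproduced, but the closest-point (CP) idea the section title names is standard: a plane with unit normal n and distance d to the origin is encoded by the single 3-vector that is its closest point to the origin, and the full measurement Jacobian is assembled from the per-block Jacobians by the chain rule. A hedged sketch (the block partitioning of the state is an assumption, not the cited paper's exact ordering):

```latex
% Closest-point plane parameterization: one 3-vector, no over-parameterization.
\[
  \boldsymbol{\Pi} \;=\; d\,\mathbf{n} \in \mathbb{R}^{3},
  \qquad \mathbf{n}^{\top}\mathbf{x} = d \;\; \text{for all points } \mathbf{x} \text{ on the plane}.
\]
% "Stacking" then assembles the complete Jacobian of the plane measurement h
% from its per-block partial derivatives w.r.t. the state sub-blocks:
\[
  \mathbf{H} \;=\; \frac{\partial h}{\partial \mathbf{x}}
  \;=\; \begin{bmatrix}
    \dfrac{\partial h}{\partial \mathbf{x}_{\mathrm{IMU}}} &
    \dfrac{\partial h}{\partial \boldsymbol{\Pi}} & \cdots
  \end{bmatrix}.
\]
```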
“…Depending on the number of sensors, visual SLAM can be classified into monocular-camera- [8][9][10][11], stereo-camera- [12][13][14], and multiple-camera-based [15,16] versions. Monocular-camera-based visual SLAM cannot obtain scale information of the environment, so it is usually combined with inertial measurement units (IMUs) to obtain three-dimensional (3D) information in the scene [17]. As for stereo-camera- and multiple-camera-based SLAM systems, they can obtain 3D coordinates of feature points by photogrammetry.…”
Section: Introduction
Confidence: 99%
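A hedged illustration of the scale point in this excerpt: for a rectified stereo pair with focal length f and baseline B, metric depth follows directly from disparity, whereas a single camera has no baseline and recovers depth only up to an unknown scale, which is why monocular systems fuse IMU measurements to fix it. Here (c_x, c_y) denotes the principal point.

```latex
% Stereo triangulation of a feature at pixel column u_L (left) and u_R (right):
\[
  Z \;=\; \frac{f\,B}{u_{L} - u_{R}},
  \qquad
  X = \frac{(u_{L} - c_{x})\,Z}{f}, \quad
  Y = \frac{(v - c_{y})\,Z}{f}.
\]
% With B = 0 (a single camera), Z is determined only up to a global scale
% factor; accelerometer measurements from an IMU supply the metric scale.
```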