Robotics: Science and Systems IX 2013
DOI: 10.15607/rss.2013.ix.021

Exploiting Urban Scenes for Vision-aided Inertial Navigation

Abstract: This paper addresses the problem of visual-inertial navigation when processing camera observations of both point and line features detected within a Manhattan world. First, we prove that the observations of (i) a single point, and (ii) a single line of known direction perpendicular to gravity (e.g., a non-vertical structural line of a building), provide sufficient information for rendering all degrees of freedom of a vision-aided inertial navigation system (VINS) observable, up to global translations.…
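
To make the two measurement types concrete, the following is a minimal sketch (function names, the standard pinhole model, and the residual forms are assumptions chosen for illustration; it is not the paper's implementation). A point contributes a 2-D reprojection residual; a line of known world direction contributes a scalar constraint, since the back-projection of an image line is a plane through the camera center whose normal must be orthogonal to the line's direction expressed in the camera frame.

```python
import numpy as np

def point_reprojection_residual(p_world, R_cw, t_cw, uv_obs, K):
    """2-D residual between a projected world point and its pixel observation."""
    p_cam = R_cw @ p_world + t_cw          # point in the camera frame
    uv_hat = (K @ p_cam)[:2] / p_cam[2]    # pinhole projection (last row of K = [0, 0, 1])
    return uv_obs - uv_hat

def line_direction_residual(d_world, R_cw, n_cam):
    """Scalar constraint from a line of known world direction d_world.

    The back-projection of an image line is a plane through the camera
    center; its unit normal n_cam must be orthogonal to the line's
    direction in the camera frame, which constrains orientation
    (including yaw) even from a single line.
    """
    return float(n_cam @ (R_cw @ d_world))
```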

Cited by 20 publications (9 citation statements) · References 19 publications
“…Most existing methods exploit only part of the structural regularity: they either use straight lines without considering their prior orientation, or use the prior orientation without adding the lines as extra measurements for better estimation. A few existing methods consider both aspects [11] [23]. In [11], lines with a prior orientation are named structural lines and treated as landmarks, in the same way as point features, for visual SLAM.…”
Section: Related Work
confidence: 99%
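
A rough sketch of why a known prior direction helps, per the idea attributed to [11]: once a structural line's direction is fixed to a Manhattan axis, only two position coordinates remain to estimate, so the landmark has 2 DoF instead of 4. The parameterization below (the line's closest point to the origin, in the plane orthogonal to the direction) is an assumed choice, not necessarily the one used in [11].

```python
import numpy as np

def structural_line_point(d_world, a, b):
    """Minimal 2-DoF parameterization of a structural line whose
    direction d_world is fixed a priori to a Manhattan axis.

    The line is pinned by its closest point to the origin, which lies
    in the plane orthogonal to d_world and is described by coordinates
    (a, b) in an orthonormal basis (e1, e2) of that plane.
    """
    d = d_world / np.linalg.norm(d_world)
    # seed with any vector not parallel to d to build the basis
    seed = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(d, seed)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(d, e1)
    return a * e1 + b * e2  # a point on the line; the full line is p + s * d
```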
“…Related to our approach, [7] proposed using observations of lines with a known direction, along with point-feature measurements, to remove the unobservability around yaw. In their method, they directly observe image lines and assign each to a world direction using a Mahalanobis check, for which they need to initially align the system's yaw with one of the building's vanishing points (VPs).…”
Section: A. Visual Inertial Fusion
confidence: 99%
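
A minimal sketch of the kind of Mahalanobis gating described above, under assumed names and a scalar residual r = n·(R d) (the image line's back-projection plane normal tested against each candidate world axis rotated into the camera frame); the thresholding and uncertainty-propagation details in [7] may differ.

```python
import numpy as np
from scipy.stats import chi2

def assign_manhattan_direction(n_cam, R_cw, dirs_world, sigma2, alpha=0.05):
    """Gate a detected image line against the candidate Manhattan axes.

    n_cam      : unit normal of the line's back-projection plane
    R_cw       : current world-to-camera rotation estimate
    dirs_world : candidate unit directions (the building's axes / VPs)
    sigma2     : variance of the scalar residual r = n_cam . (R_cw @ d),
                 propagated from line-detection and orientation uncertainty
    Returns the index of the accepted direction, or None if all fail the gate.
    """
    gate = chi2.ppf(1.0 - alpha, df=1)  # chi-square gate for a 1-DoF residual
    best_idx, best_m2 = None, np.inf
    for i, d in enumerate(dirs_world):
        r = n_cam @ (R_cw @ d)          # ~0 when d is the line's true axis
        m2 = r * r / sigma2             # squared Mahalanobis distance
        if m2 < gate and m2 < best_m2:
            best_idx, best_m2 = i, m2
    return best_idx
```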
“…Under several motion profiles this leads to a loss of accuracy in yaw [6]. Errors then accumulate along these unobservable states and tend to grow linearly with time [7].…”
Section: Introduction
confidence: 99%
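
A toy illustration of that linear growth (an assumed model, not from [6] or [7]): a constant unresolved yaw error while traveling at constant speed produces a position error proportional to distance traveled, i.e. linear in time.

```python
import numpy as np

# Toy model: a constant uncorrected yaw error delta_psi biases the
# heading estimate, so position error grows with distance traveled.
delta_psi = np.deg2rad(1.0)        # 1 degree of unobservable yaw error
speed, dt, steps = 1.0, 0.1, 1000  # 1 m/s for 100 s

pos_true = np.zeros(2)
pos_est = np.zeros(2)
for _ in range(steps):
    pos_true += speed * dt * np.array([np.cos(0.0), np.sin(0.0)])
    pos_est += speed * dt * np.array([np.cos(delta_psi), np.sin(delta_psi)])

err = np.linalg.norm(pos_est - pos_true)
# ~ 100 m * 2 sin(0.5 deg) ~ 1.75 m, proportional to elapsed time
print(f"position error after {steps * dt * speed:.0f} m: {err:.2f} m")
```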
“…There is also a large body of work using vision algorithms to help perform different robotic tasks [13,38,16,30], such as object grasping [42,11,29], navigation [6,26], trajectory control [44], and activity anticipation [23]. Many works have focused on improving SLAM techniques to better depict an environment for planning and navigation [34,28], such as incremental smoothing and mapping using the Bayes Tree [21], real-time visual SLAM over large-scale environments [46], and object-level SLAM [40].…”
Section: Related Work
confidence: 99%