2015
DOI: 10.1002/rob.21606
Vision‐based Localization and Robot‐centric Mapping in Riverine Environments

Abstract: This paper presents a vision-based localization and mapping algorithm developed for an unmanned aerial vehicle (UAV) that can operate in a riverine environment. Our algorithm estimates the three-dimensional positions of point features along a river and the pose of the UAV. By detecting features surrounding a river and the corresponding reflections on the water's surface, we can exploit multiple-view geometry to enhance the observability of the estimation system. We use a robot-centric mapping framework to furt…

Cited by 37 publications (34 citation statements) · References 59 publications
“…Instead, the robots must rely on on-board sensors, such as cameras, lidars, and inertial measurement units (IMUs). Cameras and lidars are exteroceptive sensors, relying on external features to provide incremental pose estimates [219]. On the other hand, IMUs are interoceptive sensors, providing high-frequency velocity and attitude feedback for the purpose of real-time control.…”
Section: Pose and State Estimation
confidence: 99%
“…In our previous work [19], we used a decoupled set of nonlinear observers [20] with depth parametrization, as opposed to the inverse-distance with initial view measurements used in this paper, and showed simulation results, along with limited experimental results of only the boundary estimation and landmark mapping. In this paper we employ an omnidirectional camera as our primary sensor to solve 6-DOF localization and 3D mapping by using a robot-centric framework with initial view measurements, which is introduced in our prior work [4]. We apply the results to detect the containment of our robotic mower.…”
Section: Related Work
confidence: 99%
“…Note that we parametrize the landmarks with a unit vector and an inverse distance from the robot, instead of the normalized pixel coordinates and depth along the optical axis that we used in [4]. This gives a continuous parametrization of the landmarks that are acquired through an omnidirectional camera.…”
Section: B. Motion Model for Robot-centric Mapping
confidence: 99%
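The unit-vector/inverse-distance parametrization quoted above can be sketched as a simple coordinate transform. This is a minimal illustration under our own assumptions (the function names and the robot-frame convention are ours, not the papers'): a landmark in the robot frame is stored as a bearing direction plus the reciprocal of its range, which stays well defined in every viewing direction and so suits an omnidirectional camera.

```python
import numpy as np

def to_unit_inverse_distance(p_robot):
    """Encode a landmark position (robot frame) as a unit bearing
    vector u and an inverse distance rho, so that p = u / rho."""
    d = np.linalg.norm(p_robot)
    if d == 0.0:
        raise ValueError("landmark coincides with the robot origin")
    return p_robot / d, 1.0 / d

def from_unit_inverse_distance(u, rho):
    """Recover the Cartesian landmark position from (u, rho)."""
    return u / rho

# Round-trip check on a landmark 13 m away.
p = np.array([3.0, -4.0, 12.0])
u, rho = to_unit_inverse_distance(p)
assert np.isclose(rho, 1.0 / 13.0)
assert np.allclose(from_unit_inverse_distance(u, rho), p)
```

Unlike a pixel-plus-depth encoding, which is only defined for points in front of a single image plane, this (u, rho) pair varies continuously as the landmark moves around the robot, which is the property the citing authors highlight.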