2014 CACS International Automatic Control Conference (CACS 2014)
DOI: 10.1109/cacs.2014.7097178
Target localization using RGB-D camera and LiDAR sensor fusion for relative navigation

Cited by 10 publications (6 citation statements)
References 13 publications
“…These studies can be grouped depending on their application areas and 3D modeling techniques. Some of the applications of 3D modeling presented in the literature include 3D building modeling [28], 3D city modeling [29], automatic registration for georeferencing [30], railroad center line reconstruction [31], automatic building extraction [32], scene parsing [33], elevation mapping [34], species recognition, height and crown width estimation [35], target localization and relative navigation [36], and visual localization [37]. Modeling techniques used in these studies involve automatic aerial triangulation, coarse-to-fine methods [28], digital surface nodes application [29], the iterative closest-point (ICP) algorithm [30], the random sample consensus algorithm [31], the binary space partitioning (BSP) tree [32], the Markov-random-field-based temporal method [33], fuzzy logic and particles [34], multi-scale template matching (MSTM) [35], dynamic bias estimation [36], and localization-by-recognition and vocabulary-tree-based recognition methods [37].…”
Section: Related Work
confidence: 99%
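The passage above lists the iterative closest-point (ICP) algorithm [30] among the registration techniques. For orientation, here is a minimal, generic point-to-point ICP sketch in Python (NumPy/SciPy, with a k-d tree for nearest-neighbour correspondences); it is illustrative only and is not the implementation used in Ref. [30].

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source, target, iters=30, tol=1e-6):
    """Align `source` (N x 3) to `target` (M x 3); returns the transformed source points."""
    tree = cKDTree(target)
    src = source.copy()
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)              # nearest-neighbour correspondences
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t                      # apply the incremental transform
        err = dist.mean()
        if abs(prev_err - err) < tol:            # stop when the mean residual settles
            break
        prev_err = err
    return src
```

A full registration pipeline would add outlier rejection and an initial alignment guess; this sketch only shows the core correspondence-then-transform loop that ICP iterates.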
“…Ref. [17] describes an approach to estimate the position of targets based on fusion of RGB-D camera and 2D LiDAR sensor measurements. Other authors [18,19] demonstrate instead the effectiveness of an RGB-D camera for obstacle avoidance tasks.…”
Section: Related Work
confidence: 99%
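Ref. [17] (the paper indexed here) fuses RGB-D camera and 2D LiDAR measurements to estimate the position of a target. The sketch below is not that method; it only illustrates the basic idea of combining two independent range estimates of the same target by inverse-variance weighting, with assumed, made-up noise figures.

```python
# Illustrative only: not the fusion scheme of Ref. [17]. The measurement
# variances passed in are hypothetical, not sensor datasheet values.
def fuse_ranges(r_rgbd: float, var_rgbd: float,
                r_lidar: float, var_lidar: float):
    """Inverse-variance fusion of two range estimates; returns (range, variance)."""
    w_rgbd = 1.0 / var_rgbd
    w_lidar = 1.0 / var_lidar
    r_fused = (w_rgbd * r_rgbd + w_lidar * r_lidar) / (w_rgbd + w_lidar)
    var_fused = 1.0 / (w_rgbd + w_lidar)
    return r_fused, var_fused

# Example: RGB-D depth says 2.10 m (noisier at range), LiDAR says 2.04 m.
print(fuse_ranges(2.10, 0.05**2, 2.04, 0.02**2))
```

The fused estimate leans toward the lower-variance sensor, which is the usual motivation for pairing a long-range but sparse LiDAR with a dense but shorter-range depth camera.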
“…Sensors that are most frequently used (i.e., [17][18][19]) are the Intel RealSense [20] and the Kinect [21], both characterized by a limited depth range and field of view. Thus, in addition to the local path planning algorithm strategies listed in [7,10], there is also a new category that is gaining momentum in local navigation and it is associated with the field of collision avoidance with limited field of view sensing [3][4][5].…”
Section: Related Work
confidence: 99%
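The statement above concerns depth sensors with a limited depth range and field of view. As a hedged illustration, the snippet below checks whether a point in the sensor frame falls inside such a sensing cone; the field-of-view and depth limits are assumed round numbers, not official RealSense or Kinect specifications.

```python
import math

def in_sensing_cone(x: float, y: float,
                    hfov_deg: float = 70.0,    # assumed horizontal FOV, degrees
                    min_depth: float = 0.3,    # assumed near limit, metres
                    max_depth: float = 4.0) -> bool:
    """True if the point (x forward, y left, sensor frame) is observable."""
    rng = math.hypot(x, y)
    bearing = math.degrees(math.atan2(y, x))
    return min_depth <= rng <= max_depth and abs(bearing) <= hfov_deg / 2.0

print(in_sensing_cone(2.0, 0.5))   # True: within range, ~14 deg off-axis
print(in_sensing_cone(0.5, 2.0))   # False: ~76 deg off-axis, outside the FOV
```

Points that fail this test are exactly the ones a limited-field-of-view collision-avoidance strategy has to reason about without a current measurement.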
“…5 One can certainly imagine scenarios where combinations of these sensors would allow for more robust and accurate estimations of range (or depth) to surrounding objects. [6][7][8] Without loss of generalization, we will focus on the depth measurements obtained from a LIDAR, such as the Velodyne or Slamtech LIDAR, in this paper. The experiments we conduct in our investigations are based on indoor experiments with a dual-differential drive robot with a Slamtech RPLidar S1.…”
Section: Introduction
confidence: 99%
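The quoted introduction works with depth measurements from a 2D LiDAR (an RPLidar S1). As a rough sketch, and not the RPLidar SDK or the cited authors' code, the snippet below converts a planar range scan into Cartesian points in the sensor frame and extracts the closest return in a forward sector, the kind of quantity a simple stop-distance check on a differential-drive robot might use.

```python
import numpy as np

def scan_to_points(ranges: np.ndarray, angles: np.ndarray) -> np.ndarray:
    """ranges[i] is the depth measured at bearing angles[i] (radians); returns (N, 2) points."""
    return np.column_stack((ranges * np.cos(angles), ranges * np.sin(angles)))

def min_forward_range(ranges, angles, half_sector=np.deg2rad(15)):
    """Closest valid return within +/- half_sector of the forward direction."""
    mask = np.abs(angles) <= half_sector
    valid = mask & np.isfinite(ranges) & (ranges > 0)
    return ranges[valid].min() if np.any(valid) else np.inf

# Example: a synthetic 360-degree scan with an obstacle dead ahead at 1.2 m.
angles = np.deg2rad(np.arange(-180.0, 180.0, 1.0))
ranges = np.full_like(angles, 5.0)
ranges[np.abs(angles) < np.deg2rad(2)] = 1.2
print(min_forward_range(ranges, angles))   # 1.2
```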