2015 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2015.7139679

3D Convolutional Neural Networks for landing zone detection from LiDAR

Abstract: We present a system for the detection of small and potentially obscured obstacles in vegetated terrain. The key novelty of this system is the coupling of a volumetric occupancy map with a 3D Convolutional Neural Network (CNN), which to the best of our knowledge has not been previously done. This architecture allows us to train an extremely efficient and highly accurate system for detection tasks from raw occupancy data. We apply this method to the problem of detecting safe landing zones for autonomous helicopters…
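As an illustration of the coupling described in the abstract, the sketch below runs a small 3D CNN directly over a voxel occupancy volume. It is a minimal example assuming PyTorch; the layer widths, kernel sizes, and the binary safe/unsafe output are illustrative choices, not the architecture reported in the paper.

# Minimal sketch of a 3D CNN over a volumetric occupancy grid
# (illustrative layer sizes; not the paper's reported architecture).
import torch
import torch.nn as nn

class OccupancyGrid3DCNN(nn.Module):
    def __init__(self, grid_size=32, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=5, stride=2, padding=2),  # coarse spatial features
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),
        )
        feat_dim = 32 * (grid_size // 4) ** 3
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat_dim, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, num_classes),  # e.g. safe vs. unsafe landing cell
        )

    def forward(self, occupancy):
        # occupancy: (batch, 1, D, H, W) tensor of voxel occupancy values
        return self.classifier(self.features(occupancy))

# Example: classify a batch of 32^3 occupancy volumes
logits = OccupancyGrid3DCNN()(torch.rand(4, 1, 32, 32, 32))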

Cited by 194 publications (110 citation statements) | References 23 publications
“…One study implemented a 3D CNN for use in airborne LiDAR to identify helicopter landing zones in real time [36]. Others have used them in conjunction with terrestrial LiDAR to map obstacles for autonomous cars [37,38].…”
Section: Introduction | Citation type: mentioning | Confidence: 99%
“…Recently, some studies have utilized 3D CNN for learning spatio-temporal features from videos [29,30], learning 3D structures from LiDAR point clouds [31], or learning spatio-spectral representations from hyperspectral images [32]. In general, 3D CNN is not as widely applied as 2D CNN, as the temporal dimension is usually not considered in computer vision and machine learning.…”
Section: Introduction | Citation type: mentioning | Confidence: 99%
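The LiDAR-related uses mentioned in this excerpt presume that an irregular point cloud is first converted into a regular 3D grid that a CNN can consume. The sketch below is one hedged way to perform that voxelization; the grid origin, 0.1 m voxel size, and 32^3 extent are assumptions for illustration, not parameters taken from the cited works.

# Hedged sketch: convert a LiDAR point cloud (N x 3 array of x, y, z in
# metres) into a binary occupancy grid, the kind of input a 3D CNN consumes.
import numpy as np

def voxelize(points, origin, voxel_size=0.1, dims=(32, 32, 32)):
    """Mark each voxel containing at least one LiDAR return as occupied."""
    idx = np.floor((points - origin) / voxel_size).astype(int)
    # Keep only returns that fall inside the grid bounds
    in_bounds = np.all((idx >= 0) & (idx < np.array(dims)), axis=1)
    idx = idx[in_bounds]
    grid = np.zeros(dims, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

# Example: 1000 random returns over a 3.2 m cube anchored at the origin
cloud = np.random.uniform(0.0, 3.2, size=(1000, 3))
occupancy = voxelize(cloud, origin=np.zeros(3))
print(occupancy.shape, occupancy.sum())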
“…We used PointNet as a module to extract the initial pointwise and global shape representation mainly due to its efficiency. In general, other point-based modules, or even volumetric [15,20,24] and view-based modules [9,22] for local and global shape processing could be adapted in a similar manner within our architecture. Below we describe the main focus of our work to learn the parameters of the architecture based on part hierarchies and tag data.…”
Section: Methods | Citation type: mentioning | Confidence: 99%
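For readers unfamiliar with the PointNet module referenced in this excerpt, the sketch below mimics its core idea: a shared per-point MLP followed by symmetric max pooling to obtain a permutation-invariant global shape descriptor. The feature widths and the PointNetEncoder name are illustrative assumptions, not the configuration used in the cited work.

# Rough PointNet-style module: shared per-point MLP + max pool producing a
# permutation-invariant global feature (illustrative sizes only).
import torch
import torch.nn as nn

class PointNetEncoder(nn.Module):
    def __init__(self, global_dim=256):
        super().__init__()
        # 1x1 convolutions act as an MLP applied independently to each point
        self.pointwise = nn.Sequential(
            nn.Conv1d(3, 64, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv1d(64, global_dim, kernel_size=1), nn.ReLU(inplace=True),
        )

    def forward(self, xyz):
        # xyz: (batch, 3, num_points)
        pointwise_feats = self.pointwise(xyz)              # (batch, global_dim, num_points)
        global_feat = pointwise_feats.max(dim=2).values    # symmetric pooling over points
        return pointwise_feats, global_feat

# Example: encode a batch of 1024-point clouds
point_feats, shape_descriptor = PointNetEncoder()(torch.rand(2, 3, 1024))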