2018
DOI: 10.48550/arxiv.1805.04902
Preprint

LMNet: Real-time Multiclass Object Detection on CPU using 3D LiDAR

Cited by 3 publications (3 citation statements)
References 11 publications
“…Complex-YOLO [12], BirdNet [13], and PIXOR [14] map the point cloud into a Bird's Eye View (BEV). LMNet [15] and VeloFCN [16] take the frontal view (FV) of the point cloud as input. MV3D [17] adopts both the BEV and FV of the point cloud as input.…”
Section: Related Work
confidence: 99%
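
To make the BEV mapping mentioned in the excerpt above concrete, here is a minimal sketch. It is not code from LMNet or any of the cited papers; the grid extents, resolution, and max-height cell feature are assumed values chosen purely for illustration.

```python
import numpy as np

def points_to_bev(points,
                  x_range=(0.0, 70.0), y_range=(-40.0, 40.0),
                  z_range=(-2.5, 1.0), res=0.1):
    """Discretize an (N, 3) LiDAR point cloud into a bird's-eye-view grid.

    Each cell stores the maximum point height above z_range[0]; empty
    cells remain 0. All ranges and the resolution are illustrative.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Keep only points inside the chosen metric window.
    keep = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]) &
            (z >= z_range[0]) & (z < z_range[1]))
    x, y, z = x[keep], y[keep], z[keep]
    # Metric coordinates -> integer grid indices.
    cols = ((x - x_range[0]) / res).astype(np.int32)
    rows = ((y - y_range[0]) / res).astype(np.int32)
    h = int((y_range[1] - y_range[0]) / res)
    w = int((x_range[1] - x_range[0]) / res)
    bev = np.zeros((h, w), dtype=np.float32)
    # Record the maximum height per occupied cell.
    np.maximum.at(bev, (rows, cols), z - z_range[0])
    return bev
```

A frontal-view (FV) projection, as used by LMNet and VeloFCN, would instead map each point to spherical image coordinates (azimuth, elevation) derived from the sensor geometry rather than to a ground-plane grid.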
“…The Frustum-PointNet of Qi et al. [27] and the work of Du et al. [6] operate directly on the point clouds themselves, considering a subset of points which lie within a frustum defined by a 2D bounding box on the image. Minemura et al. [22] and Li et al. [16] instead project the point cloud onto the image plane and apply Faster-RCNN-style architectures to the resulting RGB-D images. Other methods, such as TopNet [33], BirdNet [1] and Yu et al. [37], discretize the point cloud into some bird's-eye-view (BEV) representation which encodes features such as returned intensity or average height of points above the ground plane.…”
Section: Related Work
confidence: 99%
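
The frustum selection described in the excerpt above can be sketched as follows. The camera model, function name, and parameters are assumptions for illustration, not code from Frustum PointNets or the other cited works.

```python
import numpy as np

def points_in_frustum(points_cam, P, box):
    """Keep points whose image projection falls inside a 2D detection box.

    points_cam: (N, 3) points already expressed in the camera frame.
    P:          (3, 4) camera projection matrix.
    box:        (u_min, v_min, u_max, v_max) 2D bounding box in pixels.
    """
    homo = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])  # (N, 4)
    uvw = homo @ P.T                       # project onto the image plane
    in_front = uvw[:, 2] > 1e-6            # discard points behind the camera
    uvw, pts = uvw[in_front], points_cam[in_front]
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]
    u_min, v_min, u_max, v_max = box
    keep = ((u >= u_min) & (u <= u_max) &
            (v >= v_min) & (v <= v_max))
    return pts[keep]
```

In frustum-based pipelines the retained points are then passed to a 3D network for per-object box estimation, while the projection-based methods cited above instead fuse the projected depth with the RGB image.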
“…This has led to 3D bounding box detection emerging as an important problem in computer vision and robotics, particularly in the context of autonomous driving. To date, the 3D object detection literature has been dominated by approaches which make use of rich LiDAR point clouds [37,33,15,27,5,6,22,1], while the performance of image-only methods, which lack the absolute depth information of LiDAR, lags significantly behind. Given the high cost of existing LiDAR units, the sparsity of LiDAR point clouds at long ranges, and the need for sensor redundancy, accurate 3D object detection from…”
Section: Introduction
confidence: 99%