2021
DOI: 10.1109/access.2021.3131389

BirdNet+: Two-Stage 3D Object Detection in LiDAR Through a Sparsity-Invariant Bird’s Eye View

Abstract: Autonomous navigation relies upon an accurate understanding of the elements in the surroundings. Among the different on-board perception tasks, 3D object detection allows the identification of dynamic objects that cannot be registered by maps, being key for safe navigation. Thus, it often requires the use of LiDAR data, which is able to faithfully represent the scene geometry. However, although raw laser point clouds contain rich features to perform object detection, more compact representations such as the bi…

Cited by 12 publications (3 citation statements)
References 41 publications (64 reference statements)
“…A commonly used approach to overcome these challenges is using bird's eye view (BEV) geometry projection, which provides a horizontal perspective from an elevated position. Various approaches have been developed based on this method, including the PIXOR [35], BirdNet [36], BirdNet+ [37], [38], BEVDetNet [39], and Frustum-PointPillars approach [40].…”
Section: A Motivation
confidence: 99%
“…For instance, in pedestrian detection, BEV implementations may be challenged by limited vertical data. Moreover, because the result of this method is a bounding box that may enclose unnecessary surrounding points, it can lead to an incomplete or inaccurate data representation [37], [38].…”
Section: A Motivation
confidence: 99%
“…On the one hand, the object detection method based on vehicle LiDAR is inspired by the image object detection method. Point clouds are mapped into a bird’s-eye view (such as BirdNet [71], BirdNet+ [72], PIXOR [73], and YOLO3D [74]) or projected to a front view based on the horizontal and vertical angles of the points (such as LaserNet [75], FVNet [76], and RangeDet [77]) to obtain a structured data representation, which is then fed into a feedforward convolutional neural network for 3D object detection. Although the projection-based method benefits from the mature 2D detector, it inevitably loses 3D spatial information due to the spatial quantization coding.…”
Section: Object Detection Based on Roadside LiDAR
confidence: 99%
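The projection the citing statements describe, quantizing LiDAR points into a top-down grid as in BirdNet/BirdNet+ and PIXOR, can be sketched in a few lines. The snippet below is a minimal illustration of a BEV encoding with occupancy and max-height channels; the function name, detection ranges, and cell size are assumed for illustration, and this is not the paper's exact sparsity-invariant encoding (which normalizes the density channel by expected beam coverage).

```python
import numpy as np

def pointcloud_to_bev(points, x_range=(0.0, 70.0), y_range=(-35.0, 35.0),
                      z_range=(-2.5, 1.5), cell=0.1):
    """Project an (N, 3) LiDAR point cloud into a bird's-eye-view grid.

    Returns an (H, W, 2) array: channel 0 is binary occupancy,
    channel 1 is the maximum point height per cell (0 where empty).
    Ranges and cell size are illustrative, not the paper's values.
    """
    # Keep only points inside the region of interest.
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]) &
         (points[:, 2] >= z_range[0]) & (points[:, 2] < z_range[1]))
    pts = points[m]

    h = int(round((x_range[1] - x_range[0]) / cell))
    w = int(round((y_range[1] - y_range[0]) / cell))
    bev = np.zeros((h, w, 2), dtype=np.float32)

    # Discretize x/y into grid indices (spatial quantization coding).
    xi = ((pts[:, 0] - x_range[0]) / cell).astype(np.int64)
    yi = ((pts[:, 1] - y_range[0]) / cell).astype(np.int64)

    bev[xi, yi, 0] = 1.0  # occupancy channel
    # Heights are shifted to be nonnegative so empty cells (0) sit below
    # any occupied cell; ufunc.at handles repeated indices correctly.
    np.maximum.at(bev[..., 1], (xi, yi), pts[:, 2] - z_range[0])
    return bev
```

The resulting image-like tensor is what allows a mature 2D detector backbone to be reused, at the cost of the vertical-information loss the last excerpt points out.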