2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)
DOI: 10.1109/itsc55140.2022.9921947

PillarGrid: Deep Learning-Based Cooperative Perception for 3D Object Detection from Onboard-Roadside LiDAR

Cited by 35 publications (14 citation statements)
References 21 publications
“…Compared with the voxel-based approach, the pillar-based approach [3], [8], [9], [17], [19], [22] aims to reduce the time consumption during inference. These methods adjust the grid height to be equivalent to the height of the 3D space during the point cloud voxelization, thereby directly transforming the point cloud from the 3D shape to the 2D form in the BEV space.…”
Section: B. Pillar-based Methods
confidence: 99%
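The pillar-wise voxelization described in the excerpt above can be sketched as follows. This is a minimal NumPy illustration, not the cited implementations; the grid ranges, pillar size, and per-pillar point cap are illustrative assumptions.

```python
import numpy as np

def pillarize(points, x_range=(0.0, 64.0), y_range=(-40.0, 40.0),
              pillar_size=0.25, max_points=32):
    """Group LiDAR points (N, 3) into 2D BEV pillars.

    Each grid cell spans the full z-extent of the scene, so the grid has
    no vertical dimension: points that differ only in height fall into
    the same pillar, collapsing the 3D cloud into a 2D BEV representation.
    """
    nx = round((x_range[1] - x_range[0]) / pillar_size)
    ny = round((y_range[1] - y_range[0]) / pillar_size)
    # Drop points outside the BEV range.
    keep = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[keep]
    # Cell index per point uses only (x, y) -- z is ignored entirely.
    ix = ((pts[:, 0] - x_range[0]) / pillar_size).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / pillar_size).astype(int)
    pillars = {}
    for p, cx, cy in zip(pts, ix, iy):
        cell = pillars.setdefault((cx, cy), [])
        if len(cell) < max_points:  # cap points stored per pillar
            cell.append(p)
    return pillars, (nx, ny)

# Two points 3.5 m apart in height land in the same pillar.
points = np.array([[10.0, 0.0, -1.0], [10.05, 0.0, 2.5], [50.0, 20.0, 0.3]])
pillars, grid = pillarize(points)
```

In a full pillar-based detector, a small network then encodes each pillar's points into a feature vector, yielding a dense 2D BEV feature map that standard 2D convolutions can process.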
“…Pillar-based methods, all using pooling for dimensionality reduction: PointPillar [3], Pillar-OD [8], MuRF-Net [9], InfoFocus [10], CenterPoint-pillar [13], WCNN3D [17], CVFNet [18], PillarGrid [19], PillarNet [22].…”
Section: Dimensionality Reduction
confidence: 99%
“…However, most state-of-the-art object detection methods using point clouds are designed around a single onboard LiDAR, which is inherently limited by its range and susceptibility to occlusion, particularly in dense traffic scenarios. To address this, Bai et al. [29] developed PillarGrid, a novel cooperative perception framework that combines information from multiple 3D LiDARs (both onboard and roadside) to enhance situational awareness for connected and automated vehicles (CAVs). PillarGrid comprises four key components: (i) cooperative preprocessing of point clouds, (ii) pillar-wise voxelization and feature extraction, (iii) grid-wise deep fusion of features from multiple sensors, and (iv) augmented 3D object detection using a convolutional neural network (CNN).…”
Section: Transition From Classical Perception To Collaborative Perception
confidence: 99%
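As a rough illustration of stage (iii) in the excerpt above, grid-wise fusion combines the two sensors' aligned BEV feature maps cell by cell. The sketch below uses element-wise max as a stand-in fusion operator; PillarGrid's learned deep-fusion operator is not reproduced here, and all shapes and values are hypothetical.

```python
import numpy as np

def gridwise_max_fusion(feat_onboard, feat_roadside):
    """Fuse two aligned BEV feature maps of shape (H, W, C) cell by cell.

    Element-wise max is a simple placeholder for a learned fusion
    operator: it keeps whichever sensor responds strongest in each cell,
    so a region occluded for one sensor can still be covered by the other.
    """
    assert feat_onboard.shape == feat_roadside.shape
    return np.maximum(feat_onboard, feat_roadside)

onboard = np.zeros((4, 4, 2))
onboard[1, 1, 0] = 5.0    # object visible to the vehicle's LiDAR
roadside = np.zeros((4, 4, 2))
roadside[2, 3, 1] = 3.0   # object occluded for the vehicle, seen roadside
fused = gridwise_max_fusion(onboard, roadside)
```

The fused map then feeds the CNN detection head of stage (iv), which operates on it exactly as it would on a single-sensor BEV feature map.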
“…Predictive Radars first take small data of the current scenes as input and perform general perception tasks, such as object detection with PointPillars [70] and segmentation with PointNet++ [71]. The emerging cooperative perception [72][73][74][75] can greatly increase the perception range of each vehicle and deserves more attention in the future. Based on the results of these general perception tasks, Predictive Radars will then predict different future scenes, realizing the transition from small data to big data.…”
Section: Predictive Radars
confidence: 99%
“…The high cost of collecting data and the long-tail problem are serious issues. With the complete artificial systems in cyber space, Parallel Radars can generate sufficient virtual data to train new models for different downstream tasks, such as object detection [26,27], segmentation [28][29][30][31], and cooperative perception [72][73][74][75], which can address these problems effectively. Specific tasks, such as the validation of new radars [94], super-resolution [95][96][97], and the analysis of radar placement [98,99], can also be settled.…”
Section: Autonomous Driving
confidence: 99%