Accurate and reliable perception systems are essential for autonomous driving and robotics, which makes 3D object detection with multiple sensors necessary. Existing 3D detectors significantly improve accuracy by adopting a two-stage paradigm that relies solely on LiDAR point clouds for 3D proposal refinement. However, the sparsity of point clouds, particularly for faraway points, makes it difficult for a LiDAR-only refinement module to recognize and locate objects accurately. To address this issue, we propose a novel multi-modality two-stage approach called FusionRCNN, which effectively and efficiently fuses point clouds and camera images within Regions of Interest (RoI). FusionRCNN adaptively integrates both sparse geometry information from LiDAR and dense texture information from the camera in a unified attention mechanism. Specifically, in the RoI extraction step, FusionRCNN first applies RoIPooling to obtain an image set of unified size and samples raw points within each proposal to obtain the point set. It then leverages intra-modality self-attention to enhance domain-specific features, followed by a well-designed cross-attention to fuse information from the two modalities. FusionRCNN is fundamentally plug-and-play and supports different one-stage methods with almost no architectural changes. Extensive experiments on the KITTI and Waymo benchmarks demonstrate that our method significantly boosts the performance of popular detectors. Remarkably, FusionRCNN improves the strong SECOND baseline by 6.14% mAP on Waymo and outperforms competing two-stage approaches.
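The fusion stage described above (intra-modality self-attention followed by cross-attention between point and image tokens) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the module name `RoIFusionBlock`, the feature dimension, and the token counts are illustrative assumptions.

```python
# Minimal sketch of RoI-level attention fusion, assuming pre-extracted per-RoI
# point features and image features. Names and dimensions are illustrative.
import torch
import torch.nn as nn


class RoIFusionBlock(nn.Module):
    """Intra-modality self-attention followed by cross-modal attention."""

    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.point_self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.image_self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Cross-attention: point tokens query the dense image tokens.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, point_feats, image_feats):
        # point_feats: (num_rois, num_points, dim) sampled inside each proposal
        # image_feats: (num_rois, num_pixels, dim) from RoIPooling on the image
        p, _ = self.point_self_attn(point_feats, point_feats, point_feats)
        i, _ = self.image_self_attn(image_feats, image_feats, image_feats)
        # Fuse dense texture cues into the sparse geometry tokens.
        fused, _ = self.cross_attn(query=p, key=i, value=i)
        return self.norm(p + fused)


# Toy usage: 8 RoIs, 128 sampled points and a 7x7 pooled image patch per RoI.
block = RoIFusionBlock(dim=256, heads=4)
pts = torch.randn(8, 128, 256)
img = torch.randn(8, 49, 256)
out = block(pts, img)  # (8, 128, 256) fused RoI features for box refinement
```

In this sketch the point tokens act as queries so that the refined features stay aligned with the 3D geometry while absorbing image texture, which matches the role of the refinement stage in a two-stage detector.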
Fine-grained geometry, captured by aggregating point features in local regions, is crucial for object recognition and scene understanding in point clouds. Nevertheless, existing prominent point cloud backbones usually rely on max/average pooling for local feature aggregation, which largely ignores the positional distribution of points and leads to inadequate modeling of fine-grained structures. To mitigate this bottleneck, we present an efficient alternative to max pooling, Position Adaptive Pooling (PAPooling), which explicitly models spatial relations among local points using a novel graph representation and aggregates features in a position-adaptive manner, enabling position-sensitive representations of the aggregated features. Specifically, PAPooling consists of two key steps: Graph Construction and Feature Aggregation. The first constructs a graph whose edges link the center point with every neighboring point in a local region and maps their relative positional information to channel-wise attentive weights; the second adaptively aggregates local point features based on the generated weights through a Graph Convolution Network (GCN). PAPooling is simple yet effective, and flexible enough to serve as a plug-and-play operator for popular backbones such as PointNet++ and DGCNN. Extensive experiments on tasks ranging from 3D shape classification and part segmentation to scene segmentation demonstrate that PAPooling significantly improves predictive accuracy with minimal extra computational overhead. Code will be released.
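The position-adaptive aggregation step can be sketched in a few lines of PyTorch. This is a simplified stand-in, not the paper's implementation: a shared MLP over relative offsets replaces the full graph-convolution step, and the class name `PositionAdaptivePool` and all shapes are assumptions.

```python
# Simplified sketch in the spirit of PAPooling: relative positions of neighbors
# to the center point are mapped to channel-wise weights that replace max pooling.
import torch
import torch.nn as nn


class PositionAdaptivePool(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Maps 3-D relative offsets (the edges of the local graph) to per-channel weights.
        self.weight_mlp = nn.Sequential(
            nn.Linear(3, channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, channels),
        )

    def forward(self, neighbor_feats, neighbor_xyz, center_xyz):
        # neighbor_feats: (B, N, K, C) features of K neighbors per center point
        # neighbor_xyz:   (B, N, K, 3) neighbor coordinates
        # center_xyz:     (B, N, 3)    center-point coordinates
        rel_pos = neighbor_xyz - center_xyz.unsqueeze(2)        # (B, N, K, 3)
        attn = torch.softmax(self.weight_mlp(rel_pos), dim=2)   # (B, N, K, C)
        # Position-adaptive weighted sum instead of max pooling.
        return (attn * neighbor_feats).sum(dim=2)                # (B, N, C)


# Toy usage: 1024 centers, 32 neighbors each, 64 feature channels.
pool = PositionAdaptivePool(channels=64)
feats = torch.randn(2, 1024, 32, 64)
nbr_xyz = torch.randn(2, 1024, 32, 3)
ctr_xyz = torch.randn(2, 1024, 3)
agg = pool(feats, nbr_xyz, ctr_xyz)  # (2, 1024, 64)
```

Because the weights depend on where each neighbor sits relative to the center, two local regions with identical feature values but different spatial layouts aggregate to different outputs, which is exactly the position sensitivity that max pooling discards.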