2020
DOI: 10.48550/arxiv.2012.03015

CIA-SSD: Confident IoU-Aware Single-Stage Object Detector From Point Cloud

Abstract: Existing single-stage detectors for locating objects in point clouds often treat object localization and category classification as separate tasks, so the localization accuracy and classification confidence may not well align. To address this issue, we present a new single-stage detector named the Confident IoU-Aware Single-Stage object Detector (CIA-SSD). First, we design the lightweight Spatial-Semantic Feature Aggregation module to adaptively fuse high-level abstract semantic features and low-level spatial …
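The misalignment the abstract describes is commonly addressed by rectifying the classification score with a predicted localization IoU before ranking boxes. Below is a minimal, generic sketch of that idea — not the paper's exact formulation; the function and the `beta` parameter are illustrative assumptions.

```python
import numpy as np

def rectify_scores(cls_scores, pred_ious, beta=0.5):
    """Fuse classification confidence with predicted localization IoU.

    Generic IoU-aware rectification (hypothetical, not CIA-SSD's exact
    formula): the ranking score is a geometric interpolation between the
    classification score and the predicted IoU, so boxes that are
    confidently classified but poorly localized are down-weighted.
    `beta` controls how much weight the IoU prediction receives.
    """
    cls_scores = np.asarray(cls_scores, dtype=float)
    pred_ious = np.clip(np.asarray(pred_ious, dtype=float), 0.0, 1.0)
    return cls_scores ** (1.0 - beta) * pred_ious ** beta
```

With `beta=0.5`, a box with perfect classification confidence but predicted IoU 0.25 is scored 0.5, below a box that is moderately confident on both terms — which is the alignment effect a single-stage detector wants during NMS ranking.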

Cited by 16 publications (23 citation statements)
References 16 publications
“…The first and more natural solution is to fit a regular grid onto the point cloud, producing a grid cell representation. Many approaches do so by either quantizing point clouds into 3D volumetric grids (Section 6.3.1) (e.g., Song and Xiao, 2014; Zhou and Tuzel, 2018; Shi et al., 2020b) or by discretizing them to (multi-view) projections (Section 6.3.2) (e.g., Li et al., 2016; Chen et al., 2017; Beltrán et al., 2018; Zheng et al., 2020).…”
Section: Grid Cells
Confidence: 99%
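The regular-grid quantization mentioned in the statement above can be sketched in a few lines. This is a minimal illustration of mapping points to voxel cells, with illustrative parameter names not tied to any specific detector.

```python
import numpy as np

def voxelize(points, voxel_size, grid_min, grid_shape):
    """Quantize an (N, 3) point cloud into integer voxel indices.

    Minimal sketch of the regular-grid representation: each point is
    mapped to the cell that contains it; points falling outside the
    grid extent are dropped. Parameter names are illustrative.
    """
    pts = np.asarray(points, dtype=float)
    origin = np.asarray(grid_min, dtype=float)
    idx = np.floor((pts - origin) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.asarray(grid_shape)), axis=1)
    return idx[inside]
```

Real pipelines typically follow this step by aggregating the points inside each occupied voxel into a fixed-size feature (e.g., mean coordinates), which is what makes the representation amenable to 3D convolutions.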
“…Exemplary models using BEV representation can be found in the work from Wang et al. (2018), Beltrán et al. (2018), Liang et al. (2018), Yang et al. (2018b), Simon et al. (2019b), Zeng et al. (2018), Ali et al. (2019), He et al. (2020), Zheng et al. (2020), and Liang et al. (2020).…”
Section: Projection-based Representation
Confidence: 99%
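The bird's-eye-view (BEV) projection named in the statement above collapses the point cloud onto a top-down 2D grid. Below is a minimal sketch keeping the maximum height per cell; parameter names are illustrative, and real detectors usually add density and intensity channels alongside height.

```python
import numpy as np

def points_to_bev(points, x_range, y_range, resolution):
    """Project an (N, 3) point cloud to a bird's-eye-view height map.

    Minimal sketch of a BEV representation: collapse the z axis by
    keeping the maximum height per 2D cell. Empty cells are left at 0.
    """
    h = int(round((x_range[1] - x_range[0]) / resolution))
    w = int(round((y_range[1] - y_range[0]) / resolution))
    bev = np.full((h, w), -np.inf)
    xi = np.floor((points[:, 0] - x_range[0]) / resolution).astype(int)
    yi = np.floor((points[:, 1] - y_range[0]) / resolution).astype(int)
    ok = (xi >= 0) & (xi < h) & (yi >= 0) & (yi < w)
    for i, j, z in zip(xi[ok], yi[ok], points[ok, 2]):
        bev[i, j] = max(bev[i, j], z)
    bev[np.isinf(bev)] = 0.0  # empty cells get height 0
    return bev
```

The appeal of BEV, compared with full 3D voxel grids, is that objects rarely overlap in the top-down view, so standard 2D convolutions suffice downstream.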