Proceedings of the 29th ACM International Conference on Multimedia 2021
DOI: 10.1145/3474085.3475641
Neighbor-Vote

Cited by 16 publications (2 citation statements); references 19 publications.
“…The architectures for monocular 3D object detection can be mainly categorized into two groups: Pseudo-LiDAR based methods [11,34,43] that use pre-trained depth networks to generate pseudo LiDAR representations, e.g. pseudo point clouds and pseudo voxels, and then feed them to LiDAR-based 3D detectors, and the rest monocular 3D detection methods that use 2D feature learning from a single image with optional 3D cues matching, concatenating or guiding for 3D perception [24,26,31,38,39,54,57].…”
Section: Related Work
confidence: 99%
“…However, the huge potential in such a cheap and easy-to-deploy solution to 3D detection attracts more and more researchers. Remarkable progress has been made in Pseudo-LiDAR detectors [11,29,34,43,52] that use a pre-trained depth estimation network to generate Pseudo-LiDAR representations, e.g. pseudo point clouds and pseudo voxels, and then feed them to LiDAR-based 3D detectors.…”
Section: Introduction
confidence: 99%
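The Pseudo-LiDAR pipeline described in these excerpts first back-projects a predicted depth map into a pseudo point cloud using the camera intrinsics, and then feeds that cloud to an off-the-shelf LiDAR-based 3D detector. A minimal sketch of the back-projection step is below (the function name and toy intrinsics are illustrative assumptions, not taken from the cited papers):

```python
import numpy as np

def depth_to_pseudo_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth map of shape (H, W) into an (N, 3) pseudo
    point cloud using the pinhole camera model.

    Note: intrinsics (fx, fy, cx, cy) are placeholders here; real
    Pseudo-LiDAR detectors read them from the dataset calibration.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u indexes columns, v indexes rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    # Standard pinhole back-projection: X = (u - cx) * Z / fx, etc.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Drop pixels with no valid depth prediction (z <= 0).
    return points[points[:, 2] > 0]
```

The resulting (N, 3) array can be voxelized or consumed directly by a LiDAR-based detector, which is what lets these methods reuse point-cloud architectures for a monocular input.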