2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros45743.2020.9341450
MVLidarNet: Real-Time Multi-Class Scene Understanding for Autonomous Driving Using Multiple Views

Cited by 33 publications (16 citation statements) · References 18 publications
“…In another approach, 3D-MiniNet [2] proposes a learning-based projection module to extract local and global information from the 3D data and then feeds it to a 2D FCNN in order to generate semantic segmentation predictions. In a slightly different approach, MVLidarNet [7] benefits from range-image LiDAR semantic segmentation to refine object instances in the bird's-eye-view perspective, showcasing the applicability of LiDAR semantic segmentation in real-world applications.…”
Section: Hybrid Methods
Confidence: 99%
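The multi-view idea described above — segmenting in the range-image view, then carrying per-point labels into a bird's-eye-view (BEV) grid for instance-level processing — can be sketched as follows. This is an illustrative sketch only: the grid extents, cell size, and majority-vote rule are assumptions for demonstration, not MVLidarNet's actual parameters.

```python
# Sketch: scatter per-point semantic labels (e.g. from a range-image
# segmentation network) into a BEV occupancy grid by majority vote.
# Ranges, cell size, and class count are illustrative assumptions.
import numpy as np

def labels_to_bev(points, labels, x_range=(0.0, 50.0),
                  y_range=(-25.0, 25.0), cell=0.5, num_classes=3):
    """points: (N, 3) xyz array; labels: (N,) int class per point.
    Returns a (nx, ny) BEV grid of winning class labels (0 = background)."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    votes = np.zeros((nx, ny, num_classes), dtype=np.int32)

    # Discretize x/y coordinates into grid indices.
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)

    # Accumulate one vote per point for its class in its cell.
    np.add.at(votes, (ix[valid], iy[valid], labels[valid]), 1)

    bev = votes.argmax(axis=-1)           # per-cell winning class
    bev[votes.sum(axis=-1) == 0] = 0      # empty cells -> background
    return bev
```

Downstream instance refinement (clustering, box fitting) then operates on this BEV label grid rather than on the raw point cloud.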
“…A line of works [4, 18] realizes multi-view fusion either by aggregating features to refine proposals or by fusing features in the region constrained by the spatial projection. [7, 17] fuse the ROI features from the point cloud and the camera image for proposal refinement.…”
Section: Multi-view 3D Detection
Confidence: 99%
“…The authors of [24] extract features from both views and use an early fusion approach. Segmenting the spherical image first and projecting the results into the bird's-eye view for further processing is done in [25]. Ref.…”
Section: Related Work
Confidence: 99%