2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW) 2019
DOI: 10.1109/iccvw.2019.00494
Multi-View PointNet for 3D Scene Understanding

Abstract: Fusion of 2D images and 3D point clouds is important because information from dense images can enhance sparse point clouds. However, fusion is challenging because 2D and 3D data live in different spaces. In this work, we propose MVPNet (Multi-View PointNet), which aggregates 2D multi-view image features into 3D point clouds and then uses a point-based network to fuse the features in 3D canonical space to predict 3D semantic labels. To this end, we introduce view selection along with a 2D-3D feature aggregati…
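The aggregation step the abstract describes — lifting dense 2D multi-view image features onto sparse 3D points — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names, feature shapes, camera-matrix conventions, and the simple averaging fusion are all assumptions for the sketch.

```python
import numpy as np

def project_points(points, K, RT):
    """Project Nx3 world points into one view: K is the 3x3 intrinsic
    matrix, RT the 3x4 extrinsic matrix (assumed conventions)."""
    homog = np.concatenate([points, np.ones((len(points), 1))], axis=1)  # N x 4
    cam = (RT @ homog.T).T            # N x 3 camera-space coordinates
    uv = (K @ cam.T).T                # N x 3 homogeneous pixel coords
    uv = uv[:, :2] / uv[:, 2:3]       # perspective divide -> pixel coords
    return uv, cam[:, 2]              # pixel coords and depth

def aggregate_multiview_features(points, feature_maps, Ks, RTs):
    """For each 3D point, gather the 2D CNN feature at its projection in
    every view where it lands inside the image, then average over views
    (a stand-in for the paper's learned 2D-3D aggregation)."""
    n = len(points)
    c = feature_maps[0].shape[2]
    acc = np.zeros((n, c))
    cnt = np.zeros((n, 1))
    for fmap, K, RT in zip(feature_maps, Ks, RTs):
        h, w, _ = fmap.shape
        uv, depth = project_points(points, K, RT)
        u = uv[:, 0].round().astype(int)
        v = uv[:, 1].round().astype(int)
        valid = (depth > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        acc[valid] += fmap[v[valid], u[valid]]
        cnt[valid] += 1
    return acc / np.maximum(cnt, 1)   # N x C per-point fused 2D features
```

In the paper, the resulting per-point feature vectors are then fed to a point-based network (a PointNet++-style backbone) operating in 3D canonical space to predict semantic labels; here the downstream network is omitted.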


Cited by 154 publications (73 citation statements).
References 32 publications.
“…Representation Learning on Point Clouds. Recently representation learning on point clouds has drawn lots of attention for improving the performance of point cloud classification and segmentation [10], [18], [19], [68], [69], [70], [71], [72], [73], [74], [75], [76], [77], [78], [79]. In terms of 3D detection, previous methods generally project the point clouds to regular bird view grids [9], [12] or 3D voxels [10], [80] for processing point clouds with 2D/3D CNN.…”
Section: 3D Object Detection With Point Clouds
confidence: 99%
“…PointCNN [25], PointConv [53]) by 2-10% in the mean IoU evaluations. This is because…” [table residue; recoverable per-method scores — first benchmark: [6] 41.72 / 64.62, PVConv [27] 52.3 / –, TangentConv [44] 52.80 / 60.70, 3D RNN [59] 53.40 / 71.30, PointCNN [25] 57.26 / 63.86, SuperpointGraph [22] 58.04 / 66.50, MinkNet32 [3] 65.35 / 71.71, KPConv [46] 67; second benchmark: [37] 33.9, TangentConv [44] 43.8, SurfaceConv [32] 44.2, MVPNet [16] 64.1, PointConv [53] 66.6, PointASNL [56] 66.6, MinkNet42 (5cm) [3] 67.9, KPConv [46] 68.4, FusionNet (5cm) …]
Section: 3DSIS and ScanNet
confidence: 99%
“…Extending the same idea and to capture the contextual information of local patterns inside point clouds, PointNet++ [7] applies sampling and grouping operations to extract features from point clusters hierarchically. In recent years, many networks for 3D point clouds were inspired by PointNet++ [7], such as [8][9][10]. A complete review of deep learning methods for point clouds can be found in [11].…”
Section: Point Clouds
confidence: 99%
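The sampling and grouping operations that the citation above attributes to PointNet++ [7] can be sketched as follows — a minimal NumPy illustration of farthest point sampling and ball-query grouping, not the PointNet++ implementation itself (the hierarchical set-abstraction layers, feature MLPs, and hyper-parameters are omitted; function names are invented for the sketch).

```python
import numpy as np

def farthest_point_sampling(points, m):
    """Pick m indices of mutually far-apart centroids (the 'sampling' step).

    Greedy: repeatedly add the point farthest from all points chosen so far.
    """
    n = len(points)
    chosen = [0]                      # start from an arbitrary seed point
    dist = np.full(n, np.inf)         # distance of each point to the chosen set
    for _ in range(m - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(dist.argmax()))
    return np.array(chosen)

def ball_query(points, centroid_idx, radius, k):
    """For each centroid, collect up to k neighbour indices within `radius`
    (the 'grouping' step that forms local point clusters)."""
    groups = []
    for c in centroid_idx:
        d = np.linalg.norm(points - points[c], axis=1)
        groups.append(np.where(d <= radius)[0][:k])
    return groups
```

In PointNet++ each such local cluster is then fed through a shared PointNet to extract a feature per centroid, and the sample-group-extract cycle repeats hierarchically on the centroids; that learned part is not shown here.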
“…PointNet [6] and PointNet++ [7] have made fundamental improvements in this case to train a model directly through the 3D point clouds. Recently, many network architectures [8][9][10][11] were inspired by these pioneer techniques [6,7]. However, applying CNNs for the edge detection problem in point clouds is still challenging.…”
Section: Introduction
confidence: 99%