Proceedings of the 28th ACM International Conference on Multimedia 2020
DOI: 10.1145/3394171.3413829

CF-SIS: Semantic-Instance Segmentation of 3D Point Clouds by Context Fusion with Self-Attention

Abstract: 3D Semantic-Instance Segmentation (SIS) is a newly emerging research direction that aims to understand the visual information of a 3D scene at both the semantic and the instance level. The main difficulty lies in coordinating the paradox between the mutual aid the two tasks can offer each other and the suboptimal solutions that joint learning tends to produce.
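
The title names context fusion with self-attention, but the abstract excerpt stops short of describing the mechanism. As a rough illustration only, the sketch below shows generic scaled dot-product self-attention over per-point features in PyTorch; it is not the CF-SIS architecture, and the class name and feature sizes are assumptions.

```python
import torch
import torch.nn as nn

class PointSelfAttention(nn.Module):
    """Generic scaled dot-product self-attention over per-point features.

    Illustrative sketch only: NOT the CF-SIS architecture, just one way
    "context fusion with self-attention" over points could look.
    """
    def __init__(self, channels, dim=64):
        super().__init__()
        self.q = nn.Linear(channels, dim, bias=False)
        self.k = nn.Linear(channels, dim, bias=False)
        self.v = nn.Linear(channels, channels, bias=False)
        self.scale = dim ** -0.5

    def forward(self, feats):                        # feats: (N, C) per-point features
        attn = torch.softmax(self.q(feats) @ self.k(feats).t() * self.scale, dim=-1)
        return feats + attn @ self.v(feats)          # (N, C), residual context fusion
```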

Cited by 40 publications (16 citation statements)
References 35 publications
“…PointNet [32] and PointNet++ [33] are the pioneering works for understanding this kind of irregular data. Since then, many studies [27,39] have been proposed to improve the interpretability of networks for point clouds in different tasks, such as segmentation [28,29,37], classification [28,29,37], reconstruction [9,11,14,18], and completion [15,16,38]. Besides, the learned deep features of a single point or of the whole shape can also be applied to 3D-shape-based cross-modal applications, for example the shape-to-text matching in our case.…”
Section: Point-Based 3D Deep Learning
confidence: 99%
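
The excerpt above credits PointNet-style networks with handling unordered point sets. The core idea is a shared per-point MLP followed by a symmetric (order-invariant) pooling; below is a minimal PyTorch sketch, with layer widths that are illustrative assumptions rather than the published configuration.

```python
import torch.nn as nn

class PointNetEncoder(nn.Module):
    """Shared per-point MLP + max pooling: a PointNet-style global feature.

    Input:  points of shape (B, N, 3)
    Output: global shape feature of shape (B, 256)
    Layer widths are illustrative, not the published configuration.
    """
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 256),
        )

    def forward(self, points):
        per_point = self.mlp(points)            # (B, N, 256), same MLP for every point
        global_feat, _ = per_point.max(dim=1)   # symmetric pooling -> order invariance
        return global_feat
```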
“…The mIoU is obtained by averaging the IoU over all shapes. We compare our model against the state-of-the-art point-based methods [14,28,18,51,37,34,11,30,47,3,54,52,20,19,21,49,22], voxel-based methods [6], and the newest point-voxel-based model [23]. To better balance the trade-off between time efficiency and accuracy, we also reduce the output feature channels to 50% and 25%, marked as MVPCNN (0.5×Ch) and MVPCNN (0.25×Ch), respectively.…”
Section: Part Segmentation
confidence: 99%
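
The excerpt states that mIoU is obtained by averaging the IoU over all shapes. The following is a small sketch of the per-shape computation for part segmentation; the convention of counting parts absent from both prediction and ground truth as IoU = 1 is an assumption, and benchmarks differ on this detail.

```python
import numpy as np

def shape_miou(pred, gt, num_parts):
    """Mean IoU for one shape: average IoU over its part labels.

    pred, gt: integer arrays of per-point part labels, shape (N,).
    Parts absent from both prediction and ground truth count as IoU = 1
    (a common convention, assumed here).
    """
    ious = []
    for part in range(num_parts):
        inter = np.sum((pred == part) & (gt == part))
        union = np.sum((pred == part) | (gt == part))
        ious.append(1.0 if union == 0 else inter / union)
    return float(np.mean(ious))

# Dataset-level mIoU is then the average of shape_miou over all shapes.
```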
“…Apart from mIoU, mean accuracy (mAcc) is also used to evaluate the performance of our proposed model. In addition to comparing with the state-of-the-art point-based [28,47,20,11,30,40,56,42,19,53,21,49] and voxel-based methods [6], we also compare with the newest point-voxel-based model [23]. Table 2 shows the results of all methods on the S3DIS dataset, and Figure 4…”
Section: Part Segmentation
confidence: 99%
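
For comparison, mean accuracy (mAcc) as mentioned in the excerpt averages per-class point accuracy rather than pooling all points together. A short sketch under that standard convention, assumed here since the excerpt does not define it:

```python
import numpy as np

def mean_class_accuracy(pred, gt, num_classes):
    """mAcc: average of per-class point accuracies (classes absent from gt are skipped)."""
    accs = []
    for c in range(num_classes):
        mask = gt == c
        if mask.any():
            accs.append(np.mean(pred[mask] == c))
    return float(np.mean(accs))
```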
“…Deep learning models have been playing an important role in different 3D computer vision applications [54,49,52,65,27,17,60,4,58,28,3,50,18,23,21,24,25,29,30,20,32,26,22,47,46,31,34,33,19,64]. In the following, we will briefly review work related to learning implicit functions for 3D shapes in different ways.…”
Section: Related Work
confidence: 99%
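
The excerpt points to work on learning implicit functions for 3D shapes, where a network maps a query coordinate plus a shape code to an occupancy value or signed distance. Below is a minimal occupancy-style sketch; the architecture, depth, and sizes are assumptions, not taken from any cited paper.

```python
import torch
import torch.nn as nn

class ImplicitOccupancy(nn.Module):
    """Maps a 3D query point plus a shape latent code to an occupancy probability.

    Sizes and depth are illustrative, not taken from any particular paper.
    """
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, xyz, z):
        # xyz: (B, M, 3) query points; z: (B, latent_dim) shape code
        z_exp = z.unsqueeze(1).expand(-1, xyz.shape[1], -1)
        return torch.sigmoid(self.net(torch.cat([xyz, z_exp], dim=-1)))  # (B, M, 1)
```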