2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021
DOI: 10.1109/cvpr46437.2021.00177
One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation

Cited by 120 publications (136 citation statements)
References 30 publications
“…Weakly Supervised 3D Semantic Segmentation: Existing works explore 3D semantic segmentation with various types of weak supervision, including 2D image [44], subcloud-level [48], segment-level [38], and point-level supervision [11,26,34,52]. The first three types can be grouped into indirect annotations [11].…”
Section: Related Work
“…around 22.3 minutes to annotate an indoor scene on average [5]). Thus, weakly… the performance of PointMatch on the ScanNet-v2 and S3DIS datasets over various weakly supervised semantic segmentation settings: annotating 0.01% or 0.1% of points [11], 20 points per scene [10], and "1thing1click" [26]. (c), (d) a comparison between previous works and the proposed approach.…”
Section: Introduction
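The weak-supervision settings cited above (labelling 0.01% or 0.1% of points, or a fixed budget of 20 points per scene) can be simulated by masking the ground-truth labels of a fully annotated point cloud. A minimal sketch, assuming NumPy; `sparse_label_mask` is a hypothetical helper, not from any of the cited papers:

```python
import numpy as np

def sparse_label_mask(num_points, budget, seed=None):
    """Return a boolean mask selecting the points that keep their labels.

    budget: a float fraction (e.g. 0.001 for the "0.1%" setting) or an
            int count (e.g. 20 for the "20 points per scene" setting).
    Hypothetical helper for simulating weakly supervised training data.
    """
    rng = np.random.default_rng(seed)
    if isinstance(budget, float):
        k = int(round(num_points * budget))   # fraction of points
    else:
        k = int(budget)                       # absolute per-scene budget
    k = max(1, min(k, num_points))            # keep at least one label
    mask = np.zeros(num_points, dtype=bool)
    mask[rng.choice(num_points, size=k, replace=False)] = True
    return mask

# e.g. a ScanNet-style scene with ~150k points under the "0.1%" setting
mask = sparse_label_mask(150_000, 0.001)
print(mask.sum())  # 150 labelled points
```

During training, the loss would then be computed only over `mask`-selected points, which is the common mechanics shared by the 0.01%/0.1% and per-scene-budget settings the quote compares.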