2021
DOI: 10.48550/arxiv.2111.10701
Preprint

Self-Supervised Point Cloud Completion via Inpainting

Abstract: When navigating in urban environments, many of the objects that need to be tracked and avoided are heavily occluded. Planning and tracking using these partial scans can be challenging. The aim of this work is to learn to complete these partial point clouds, giving us a full understanding of the object's geometry using only partial observations. Previous methods achieve this with the help of complete, ground-truth annotations of the target objects, which are available only for simulated datasets. However, such …

Cited by 4 publications (4 citation statements; citing publications from 2022 and 2024).
References 24 publications.

Citation statements:
“…The auto-encoding learns the basic shape of the object and uses it to fool the discriminator by minimising the gap between real-scene and artificial data. PointPnCNet [101] uses inpainting, where a portion of the input data is removed and the network trains to complete the point cloud, including the missing region. PointPnCNet is trained only on partial inputs.…”
Section: B. Learning-Based Approach
confidence: 99%
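
The inpainting objective described above lends itself to a compact sketch: hide a region of an already-partial scan, ask a completion network to reproduce the full partial cloud (including the hidden region), and never rely on a complete ground-truth shape. The snippet below is a minimal, hypothetical illustration of that idea in PyTorch; the network, the masking heuristic, and the names (CompletionNet, inpainting_step, chamfer_distance) are assumptions for illustration, not the actual PointPnCNet [101] architecture or training code.

```python
# Minimal sketch of inpainting-style self-supervised point cloud completion:
# remove a region of a partial scan and train the network to reproduce the
# points it never saw. All names here are hypothetical illustrations.
import torch
import torch.nn as nn


def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets of shape (B, N, 3) and (B, M, 3)."""
    d = torch.cdist(a, b)                                   # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()


class CompletionNet(nn.Module):
    """Toy encoder-decoder: pool per-point features, decode a fixed-size cloud."""

    def __init__(self, n_out: int = 2048):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 256))
        self.decoder = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, n_out * 3))
        self.n_out = n_out

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(pts).max(dim=1).values           # permutation-invariant pooling
        return self.decoder(feat).view(-1, self.n_out, 3)


def inpainting_step(net: nn.Module, partial: torch.Tensor) -> torch.Tensor:
    """One self-supervised step: hide points near a random seed, predict the rest."""
    b, n, _ = partial.shape
    seed = partial[torch.arange(b), torch.randint(n, (b,))]           # (B, 3) random seed points
    dist = (partial - seed[:, None, :]).norm(dim=2)                   # distance of each point to its seed
    hidden = dist < dist.median(dim=1, keepdim=True).values * 0.5     # region to remove
    visible = partial.masked_fill(hidden[..., None], 0.0)             # crude masking: zero out hidden points
    pred = net(visible)
    # The target is the original partial scan, which includes the held-out region;
    # no complete ground-truth shape is ever required.
    return chamfer_distance(pred, partial)


if __name__ == "__main__":
    net = CompletionNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    partial_scan = torch.rand(4, 1024, 3)                    # stand-in for partial LiDAR scans
    loss = inpainting_step(net, partial_scan)
    loss.backward()
    opt.step()
    print(f"loss: {loss.item():.4f}")
```

The actual method very likely differs in how the removed region is chosen and how the loss is weighted, but the essential self-supervision signal is the same: the held-out points of an already-partial scan act as free labels.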
“…Points are fused in a new dimension. In addition, an attention mechanism is used to weight the processing so that more important parts are highlighted, and multiple branches are added to enrich the fusion level [36,37]. The structure is inspired by CC-Net [38].…”
Section: Introduction
confidence: 99%
“…It expresses the spatial distribution and surface features of the target. However, the sparsity of point clouds has always been a serious problem for radar perception [4,5]. Making sparse point clouds dense and complete is a focus of current research [6,7].…”
Section: Introduction
confidence: 99%