2020 International Conference on 3D Vision (3DV)
DOI: 10.1109/3dv50981.2020.00025
Self-Supervised Learning of Non-Rigid Residual Flow and Ego-Motion

Cited by 46 publications (37 citation statements) · References 19 publications
“…Therefore, self-supervised learning of scene flow has important research value for 3D scene perception. Several recent works (Wu et al., 2020; Mittal et al., 2020; Pontes et al., 2020; Tishchenko et al., 2020) have studied unsupervised learning of scene flow. PointPWC-Net (Wu et al., 2020) introduces three self-supervised losses, a Chamfer loss, a smoothness constraint loss, and a Laplacian regularization loss, into its framework for scene flow learning.…”
Section: Related Work
confidence: 99%
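Two of the self-supervised losses mentioned in the quote can be sketched in plain NumPy. This is a minimal illustration only, not PointPWC-Net's actual implementation: the function names and the k-nearest-neighbor formulation of the smoothness term are assumptions, and the Laplacian regularization term is omitted for brevity.

```python
import numpy as np

def chamfer_loss(pred, target):
    """Symmetric Chamfer distance between two point sets of shape (N, 3) and (M, 3).

    For each predicted point, find its nearest target point (and vice versa)
    and average the squared distances in both directions.
    """
    # Pairwise squared distances, shape (N, M).
    d2 = ((pred[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def smoothness_loss(points, flow, k=4):
    """Penalize flow differences between each point and its k nearest neighbors.

    Illustrative assumption: a brute-force k-NN graph over the source points.
    """
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)           # exclude self-matches
    knn = np.argsort(d2, axis=1)[:, :k]    # indices of the k nearest neighbors
    diff = flow[:, None, :] - flow[knn]    # (N, k, 3) flow differences
    return (diff ** 2).sum(-1).mean()
```

A perfectly warped point cloud gives zero Chamfer loss, and a spatially constant flow gives zero smoothness loss, which is what makes both usable as self-supervision signals.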
“…Pontes et al. (Pontes et al., 2020) constrain the non-rigid motion flow using the graph Laplacian of the raw point cloud, which embeds the topology of the point cloud to capture contextual information. Tishchenko et al. (Tishchenko et al., 2020) split the self-supervised learning of scene flow into two steps: the ego-motion flow is first estimated under the assumption that the LiDAR is moving and the scene is stationary, and the non-rigid flow is then estimated under the assumption that the LiDAR is stationary and the scene is moving.…”
Section: Related Work
confidence: 99%
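The two-step decomposition described by Tishchenko et al. can be illustrated with a hypothetical NumPy sketch: fit a single rigid transform to the total flow (via the Kabsch algorithm) to obtain the ego-motion component, and treat whatever remains as the non-rigid residual. The function names are assumptions, and the paper's method estimates these flows with learned networks rather than a closed-form fit.

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform (R, t) aligning src to dst (Kabsch algorithm)."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def decompose_flow(points, flow):
    """Split a total flow into an ego-motion (rigid) part and a non-rigid residual."""
    R, t = fit_rigid(points, points + flow)
    ego_flow = points @ R.T + t - points      # flow explained by rigid motion alone
    return ego_flow, flow - ego_flow          # residual captures object motion
```

If the scene is entirely static, the residual is (numerically) zero and the whole flow is explained by ego-motion, matching the first assumption in the quoted two-step scheme.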
“…In particular, applications such as self-driving and robot navigation rely on a robust perception of dynamically changing 3D scenes. To equip autonomous agents with the ability to infer spatiotemporal geometric properties, there has recently been increased interest in 3D scene flow as a form of low-level dynamic scene representation [37, 67, 73, 51, 49, 54]. Scene flow is the 3D motion field of points in the scene [69] and is a generalization of 2D optical flow.…”
Section: Introduction
confidence: 99%
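The relationship stated in the quote, that 2D optical flow is the generalization boundary of 3D scene flow, can be made concrete with a toy pinhole-camera sketch: projecting the scene points before and after applying the 3D motion field yields the induced optical flow. The function names and the unit focal length are illustrative assumptions.

```python
import numpy as np

def project(points, f=1.0):
    """Pinhole projection of 3D points (N, 3) to image coordinates (N, 2)."""
    return f * points[:, :2] / points[:, 2:3]

def optical_flow_from_scene_flow(points, scene_flow, f=1.0):
    """2D optical flow induced by a 3D scene flow under a static pinhole camera."""
    return project(points + scene_flow, f) - project(points, f)
```

For example, a point at depth 2 moving one unit to the right produces half a unit of horizontal image motion, showing how the same 3D motion shrinks in the image as depth grows.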
“…As a result, many methods have resorted to training on simulated data [41, 73, 51], but this comes at the price of a non-negligible domain gap. Other methods have attempted to solve the problem in a completely unsupervised manner [67, 75, 44]; however, they fail to provide competitive performance. In Fig.…”
Section: Introduction
confidence: 99%