2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.01288
SLIM: Self-Supervised LiDAR Scene Flow and Motion Segmentation

Cited by 41 publications (36 citation statements)
References 43 publications
“…SLIM [BEM*21] removes the annotation requirement on realistic data by integrating self‐supervised scene flow estimation with a motion segmentation framework. SLIM shows that a motion segmentation signal can be generated by detecting the discrepancy between raw flow predictions and rigid ego‐motion.…”
Section: Methods
confidence: 99%
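The discrepancy cue described in this quote — comparing raw per-point flow predictions against the flow induced by rigid ego-motion — can be sketched in a few lines. This is a minimal NumPy illustration under assumed inputs (known ego-motion `R_ego`, `t_ego` and a hand-picked threshold), not SLIM's actual implementation:

```python
import numpy as np

def segment_motion(points, flow, R_ego, t_ego, thresh=0.05):
    """Label points as moving where the predicted flow deviates from
    the flow a static point would exhibit under rigid ego-motion.

    points: (N, 3) source point cloud
    flow:   (N, 3) raw per-point scene flow prediction
    R_ego:  (3, 3) ego-motion rotation, t_ego: (3,) translation
    """
    # Flow induced purely by ego-motion on a static scene.
    ego_flow = points @ R_ego.T + t_ego - points
    # Per-point residual between predicted and ego-motion flow.
    residual = np.linalg.norm(flow - ego_flow, axis=1)
    return residual > thresh  # True = dynamic point
```

Points whose residual exceeds the threshold are treated as dynamic; the rest are consistent with the stationary background.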
“…From pedestrians walking at a constant speed to high‐speed vehicles, the issue of detecting objects of interest can be addressed by segmenting the underlying motions. Intuitively, the segmentation of different motion fields is conducted through classifying the point cloud into moving bodies and stationary backgrounds [BEM*21]. Discontinuities in the scene flow are key cues for segmenting a point cloud into several individual objects with different motion fields.…”
Section: Applications
confidence: 99%
“…Scene flow estimation [1] aims to generate a 3D motion field of a dynamic scene. As a fundamental representation of dynamics, scene flow can be applied to various tasks, such as motion segmentation [2], 3D object detection [3], and point cloud accumulation [4], as well as to downstream applications including robotics and autonomous driving [5], [6]. In recent years, with the widespread deployment of 3D sensors and the rise of deep learning techniques for point cloud processing, learning scene flow directly from 3D point clouds has attracted increasing research attention.…”
Section: Introduction
confidence: 99%
“…the self-supervised setting, in most previous approaches [2], [11], [12], [17], [18], [19], [20], [21], [22], [23], models estimate scene flow between two point clouds, and then the estimated scene flow is used to warp the source point cloud to match the target one. The main supervision signal is obtained by minimizing the discrepancy between the warped point cloud and the target point cloud, that is, by minimizing the distance between corresponding points in the two point clouds.…”
Section: Introduction
confidence: 99%
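The warp-and-match supervision this quote describes — warp the source cloud by the predicted flow, then penalize the distance to the target cloud — can be sketched as follows. This is a minimal NumPy illustration, not code from any of the cited methods; the function names and the one-directional nearest-neighbor loss are simplifying assumptions:

```python
import numpy as np

def nn_distance_loss(warped, target):
    """Mean distance from each warped point to its nearest target point
    (one direction of the Chamfer distance)."""
    # Pairwise distances, shape (N, M); fine for small illustrative clouds.
    d = np.linalg.norm(warped[:, None, :] - target[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def self_supervised_loss(source, flow, target):
    """Warp the source cloud with the predicted flow, then measure how
    well the warped cloud matches the target cloud."""
    warped = source + flow
    return nn_distance_loss(warped, target)
```

A perfect flow prediction drives the loss to zero, so minimizing this quantity supervises the flow network without any annotated correspondences — exactly the signal the quoted passage refers to.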