2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00815
PointFlowNet: Learning Representations for Rigid Motion Estimation From Point Clouds

Abstract: Despite significant progress in image-based 3D scene flow estimation, the performance of such approaches has not yet reached the fidelity required by many applications. Simultaneously, these applications are often not restricted to image-based estimation: laser scanners provide a popular alternative to traditional cameras, for example in the context of self-driving cars, as they directly yield a 3D point cloud. In this paper, we propose to estimate 3D motion from such unstructured point clouds using a deep neu…

Cited by 125 publications (95 citation statements)
References 42 publications
“…Two possible approaches can be considered. (1) We can compute the salient features and hybrid salient coefficients by minimizing the reconstruction error based on labeled samples within each class, and at the same time minimize the reconstruction error over all classes as a whole to preserve the global structures; (2) We can re-define the initial label matrix, and at the same time consider propagating the label information from labeled data to the unlabeled data. In addition, the optimal determination of dictionary size still remains an open problem and will also be investigated in future work.…”
Section: Discussion
confidence: 99%
“…With the increasing complexity of contents, diversity of distribution and high-dimensionality of real data, how to represent data efficiently for subsequent classification or clustering still remains an important research topic [1][2][3][9][50]. To represent data, some feasible methods can be used, such as sparse representation (SR) by dictionary learning (DL) [4][5][6][7][8], low-rank coding [9][10][15][38][39] and matrix factorization [11][12], which are inspired by the fact that high-dimensional data can usually be characterized by applying a low-dimensional or compressed space in which the possible noise and redundant information can be removed in addition to preserving the useful information and important structures.…”
Section: Introduction
confidence: 99%
“…Concurrent to our work, [3] estimate scene flow as rigid motions of individual objects or the background with a network that jointly learns to regress ego-motion and detect 3D objects. [23] jointly estimate object rigid motions and segment objects based on their motions.…”
Section: Related Work
confidence: 99%
“…Following the use of LiDAR, learning-based solutions exist to predict matches from point clouds. Some algorithms train neural network models on unstructured LiDAR point clouds [23], [24]. However, training in that domain is a challenging task.…”
Section: Related Work
confidence: 99%
“…Table I quantifies the results after each fusion step in our LiDAR-Flow pipeline. Since LiDAR-Flow incorporates LiDAR measurements and stereo images into dense scene flow estimation in the image domain, [21], [23], [24] cannot be included in our evaluation. They exploit the full resolution of LiDAR in the point cloud domain and remove LiDAR points on the ground, which is inconsistent with our algorithm.…”
Section: A. Evaluation Data
confidence: 99%