2020
DOI: 10.48550/arxiv.2004.11647
Preprint
Any Motion Detector: Learning Class-agnostic Scene Dynamics from a Sequence of LiDAR Point Clouds

Cited by 1 publication (3 citation statements)
References 0 publications
“…We observe that mean L2 error increases substantially when ego motion is not compensated for across all object types and across moving and stationary objects. This is also consistent with previous works [16]. We also ran a similar experiment where the model consumes non ego motion compensated point clouds, but instead subtracts ego motion from the predicted flow during training and evaluation.…”
Section: Discussion (supporting)
confidence: 86%
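The statement above describes a variant in which the model consumes raw (non-compensated) point clouds and ego motion is instead subtracted from the predicted flow. A minimal NumPy sketch of the ego-induced flow term, assuming a 4×4 rigid odometry transform `T_ego` (a hypothetical name; the papers' exact pose representation may differ):

```python
import numpy as np

def ego_induced_flow(points, T_ego):
    """Displacement each point would undergo due to ego motion alone.

    points: (N, 3) array in the sensor frame at time t.
    T_ego:  (4, 4) rigid transform taking the sensor frame at time t
            into the frame at time t+1 (e.g. from odometry).
    """
    homo = np.hstack([points, np.ones((len(points), 1))])  # (N, 4)
    moved = (homo @ T_ego.T)[:, :3]                        # apply transform
    return moved - points

# Hypothetical example: pure forward translation of 1 m between sweeps.
pts = np.array([[10.0, 0.0, 0.0], [5.0, 2.0, 0.0]])
T = np.eye(4)
T[0, 3] = 1.0
flow = ego_induced_flow(pts, T)  # every point shifts by (1, 0, 0)
# Subtracting this from a model's predicted flow would leave only
# object motion: scene_flow = predicted_flow - ego_induced_flow(...)
```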
“…We argue that this is more realistic for AV applications in which ego motion is available from IMU/GPS sensors [49]. Furthermore, having a consistent coordinate frame for both input frames lessens the burden on a model to correspond moving objects between frames [16] as explored in Appendix B.…”
Section: Rigid Body Assumption (mentioning)
confidence: 99%
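The statement above argues that expressing both input frames in a consistent coordinate frame relieves the model of corresponding points displaced by ego motion. A toy sketch (hypothetical poses and a stationary landmark) showing that aligning both sweeps zeroes the apparent motion of static points:

```python
import numpy as np

# Hypothetical setup: the ego vehicle advances 2 m in x between sweeps,
# and a stationary landmark sits at world position (10, 3, 0).
T_world_from_ego_t0 = np.eye(4)          # ego pose at time t0
T_world_from_ego_t1 = np.eye(4)          # ego pose at time t1
T_world_from_ego_t1[0, 3] = 2.0

landmark_world = np.array([10.0, 3.0, 0.0, 1.0])

# The landmark as each sweep observes it, in that sweep's sensor frame.
p_t0 = np.linalg.inv(T_world_from_ego_t0) @ landmark_world  # (10, 3, 0)
p_t1 = np.linalg.inv(T_world_from_ego_t1) @ landmark_world  # ( 8, 3, 0)

# Raw sensor frames: the static point appears to move by -2 m in x,
# so a model would have to learn this correspondence itself.
raw_offset = p_t1[:3] - p_t0[:3]

# Re-express the t0 observation in the t1 frame (one consistent frame):
p_t0_in_t1 = np.linalg.inv(T_world_from_ego_t1) @ (T_world_from_ego_t0 @ p_t0)
aligned_offset = p_t1[:3] - p_t0_in_t1[:3]  # zero for stationary points
```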