2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01658
BE-STI: Spatial-Temporal Integrated Network for Class-agnostic Motion Prediction with Bidirectional Enhancement

Cited by 15 publications (22 citation statements)
References: 34 publications
“…Motion prediction aims to estimate the future positions of objects based on past observations. Given consecutive point clouds from past frames, some works [28], [29], [30], [31], [58], [59], [60] propose to convert the point clouds into bird's eye view (BEV) maps and estimate a future motion field from these BEV maps. MotionNet [28] learns to simultaneously estimate both semantic information and future motion in a supervised manner.…”
Section: Related Work (mentioning)
confidence: 99%
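To make the BEV conversion step described above concrete, here is a minimal sketch of voxelizing a single LiDAR sweep into a binary BEV pseudo-image, assuming an illustrative 64 m x 64 m grid at 0.25 m resolution with 13 height slices; the function name and all parameters are placeholders, not the exact pipeline of MotionNet or the other cited works.

```python
import numpy as np

def pointcloud_to_bev(points, x_range=(-32.0, 32.0), y_range=(-32.0, 32.0),
                      z_range=(-3.0, 2.0), voxel_size=0.25, z_bins=13):
    """Voxelize an (N, 3) LiDAR point cloud into a binary BEV occupancy grid.

    Grid extents, resolution, and height binning are illustrative assumptions;
    MotionNet-style pipelines build a similar pseudo-image but may differ in detail.
    """
    # Keep only points inside the region of interest.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]) &
            (points[:, 2] >= z_range[0]) & (points[:, 2] < z_range[1]))
    pts = points[mask]

    # Discretize x/y into BEV cells and z into height channels.
    ix = ((pts[:, 0] - x_range[0]) / voxel_size).astype(np.int64)
    iy = ((pts[:, 1] - y_range[0]) / voxel_size).astype(np.int64)
    iz = ((pts[:, 2] - z_range[0]) / (z_range[1] - z_range[0]) * z_bins).astype(np.int64)
    iz = np.clip(iz, 0, z_bins - 1)

    h = int((x_range[1] - x_range[0]) / voxel_size)
    w = int((y_range[1] - y_range[0]) / voxel_size)
    bev = np.zeros((z_bins, h, w), dtype=np.float32)
    bev[iz, ix, iy] = 1.0  # binary occupancy per height slice
    return bev
```

A sequence of such BEV frames from consecutive sweeps then serves as the input from which the motion field is estimated.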
“…We conduct motion prediction experiments on nuScenes [73]. Following previous works [28], [29], [30], [31], we divide the dataset into three parts: 500 scenes for training, 100 for validation, and 250 for test. During the validation and testing phases, the ground truth motion data is derived from the detection and tracking annotations provided by nuScenes.…”
Section: Application on Self-supervised Class-agnostic Motion Prediction (mentioning)
confidence: 99%
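A minimal sketch of materializing the 500/100/250 scene split quoted above is given below; the shuffling, the fixed seed, and the `split_scenes` helper are assumptions for illustration, since the cited works use their own fixed scene lists.

```python
import random

def split_scenes(scene_tokens, n_train=500, n_val=100, n_test=250, seed=0):
    """Split nuScenes scene tokens into train/val/test following the
    500/100/250 convention (850 trainval scenes in total).

    The random shuffle and seed are illustrative assumptions; published works
    typically fix an explicit list of scenes per split.
    """
    assert len(scene_tokens) >= n_train + n_val + n_test
    tokens = list(scene_tokens)
    random.Random(seed).shuffle(tokens)
    train = tokens[:n_train]
    val = tokens[n_train:n_train + n_val]
    test = tokens[n_train + n_val:n_train + n_val + n_test]
    return train, val, test
```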
“…However, these approaches may face challenges when handling categories that have not been seen in the training set, mainly due to their reliance on object detection (Wu, Chen, and Metaxas 2020). To address this challenge, the class-agnostic motion prediction task (Schreiber, Hoermann, and Dietmayer 2019; Wu, Chen, and Metaxas 2020; Wang et al. 2022; Wei et al. 2022) has been proposed to provide complementary information. These methods take a sequence of previous point clouds as input and predict the future displacements for each Bird's Eye View (BEV) cell.…”
Section: Introduction (mentioning)
confidence: 99%
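The shared interface of these methods (a stack of past BEV frames in, a 2-D displacement per BEV cell per future step out) can be sketched as follows; the `BEVMotionHead` module, its layer sizes, and the tensor shapes are illustrative assumptions, not the BE-STI architecture.

```python
import torch
import torch.nn as nn

class BEVMotionHead(nn.Module):
    """Toy class-agnostic motion predictor: consumes T past BEV frames with C
    height channels each and regresses a 2-D displacement per BEV cell for
    each of T_future future steps. Purely illustrative; not BE-STI."""

    def __init__(self, t_past=5, c_height=13, t_future=2, hidden=64):
        super().__init__()
        self.t_future = t_future
        self.encoder = nn.Sequential(
            nn.Conv2d(t_past * c_height, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # 2 channels (dx, dy) per future frame, predicted for every BEV cell.
        self.head = nn.Conv2d(hidden, t_future * 2, kernel_size=1)

    def forward(self, bev_seq):
        # bev_seq: (B, T, C, H, W) -> fold time and height into channels.
        b, t, c, h, w = bev_seq.shape
        x = self.encoder(bev_seq.reshape(b, t * c, h, w))
        motion = self.head(x)                        # (B, T_future*2, H, W)
        return motion.reshape(b, self.t_future, 2, h, w)

# Example: 5 past frames of a 13x256x256 BEV grid -> displacements for 2 future steps.
pred = BEVMotionHead()(torch.zeros(1, 5, 13, 256, 256))
print(pred.shape)  # torch.Size([1, 2, 2, 256, 256])
```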