2021 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/iv48863.2021.9575718
MultiXNet: Multiclass Multistage Multimodal Motion Prediction

Cited by 25 publications
(29 citation statements)
References 26 publications
“…In order to take the multimodality into account, multiple trajectories can be predicted for an actor [10], [11], [12]. When the uncertainty of the prediction is considered, a spatial probability distribution is provided at each of the given timepoints independently [9], [13]. The mathematical details can also be found in the following section.…”
Section: Related Work
confidence: 99%
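The statement above describes the two prediction styles the cited works combine: a set of candidate trajectories (modes) to capture multimodality, with an independent spatial probability distribution at each future timepoint to capture uncertainty. As a minimal sketch (not MultiXNet's actual loss; the mode count, isotropic-Gaussian assumption, and function name are illustrative), the likelihood of an observed trajectory under such an output can be computed per mode and marginalized:

```python
import numpy as np

def multimodal_nll(modes_mu, modes_sigma, mode_probs, gt):
    """Negative log-likelihood of a ground-truth trajectory under a
    multimodal prediction: K modes, each a sequence of independent
    2-D isotropic Gaussians (one per future timepoint).

    modes_mu:    (K, T, 2) predicted waypoint means
    modes_sigma: (K, T)    per-waypoint standard deviations
    mode_probs:  (K,)      mode probabilities, summing to 1
    gt:          (T, 2)    observed future trajectory
    """
    # Per-mode log-likelihood: product over timepoints of independent
    # Gaussians, i.e. a sum of per-timepoint Gaussian log-densities.
    sq = np.sum((modes_mu - gt) ** 2, axis=-1)                      # (K, T)
    log_g = -sq / (2 * modes_sigma ** 2) - np.log(2 * np.pi * modes_sigma ** 2)
    log_mode = log_g.sum(axis=-1)                                   # (K,)
    # Marginalize over modes with log-sum-exp for numerical stability.
    z = log_mode + np.log(mode_probs)
    m = np.max(z)
    return -(m + np.log(np.sum(np.exp(z - m))))
```

Because each timepoint's distribution is independent, the per-mode term factorizes into a sum of per-waypoint log-densities; only the mixture over modes requires the log-sum-exp.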
“…Next, we study the proposed representation in supervised trajectory prediction tasks by replacing the waypoint representation with the polynomial representation using (3), and compare prediction performances using the different representations. We adapt MultiXNet [9], which is a deep model with competitive performance designed to detect traffic actors around a SDV and predict their future trajectories.…”
Section: Applying the Representation in Supervised Learning
confidence: 99%
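The cited work swaps MultiXNet's waypoint outputs for a polynomial trajectory representation (its equation (3), not reproduced here). As a hedged sketch of the general idea, assuming a simple per-axis least-squares fit rather than the paper's exact parameterization, a waypoint trajectory can be converted to polynomial coefficients and back:

```python
import numpy as np

def waypoints_to_poly(ts, xy, degree=3):
    """Fit per-axis polynomial coefficients to a waypoint trajectory.

    ts: (T,) timestamps; xy: (T, 2) waypoints.
    Returns (2, degree + 1) coefficients, highest power first.
    """
    return np.stack([np.polyfit(ts, xy[:, d], degree) for d in range(2)])

def poly_to_waypoints(ts, coeffs):
    """Evaluate the polynomial representation back to (T, 2) waypoints."""
    return np.stack([np.polyval(c, ts) for c in coeffs], axis=-1)
```

A low-degree polynomial enforces smoothness and compresses the trajectory to a few coefficients per axis, which is what makes it a drop-in replacement target for a regression head.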
“…Prior works in the field of sensor fusion have mostly focused on the perception aspect of driving, e.g. 2D and 3D object detection [22,12,66,9,44,31,34,61,33,37], motion forecasting [22,36,5,35,63,6,19,38,32,9], and depth estimation [24,60,61,33]. These methods focus on learning a state representation that captures the geometric and semantic information of the 3D scene.…”
Section: Introduction
confidence: 99%