2019
DOI: 10.48550/arxiv.1911.12736

DiversityGAN: Diversity-Aware Vehicle Motion Prediction via Latent Semantic Sampling

Abstract: Vehicle trajectory prediction is crucial for autonomous driving and advanced driver assistant systems. While existing approaches may sample from a predicted distribution of vehicle trajectories, they lack the ability to explore it - a key ability for evaluating safety from a planning and verification perspective. In this work, we devise a novel approach for generating realistic and diverse vehicle trajectories. We extend the generative adversarial network (GAN) framework with a low-dimensional approximate seman…
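The truncated abstract describes extending a GAN with a low-dimensional approximate semantic latent space so that the predicted trajectory distribution can be explored rather than only sampled. The sketch below is a minimal, hypothetical illustration of that general idea, not the paper's procedure: draw many latent codes, project each into a low-dimensional semantic space, and keep a small subset chosen by farthest-point selection so the retained samples cover diverse outcomes. The networks, dimensions, and the farthest-point heuristic are assumptions made for illustration.

```python
# Minimal sketch (not the paper's implementation): diversity-aware sampling
# by selecting latent codes that are spread out in a low-dimensional
# "semantic" space. The generator and semantic encoder are stand-in linear maps.
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 32      # assumed GAN latent size
SEMANTIC_DIM = 2     # assumed low-dimensional semantic space
HORIZON = 12         # predicted trajectory length (x, y per step)

# Stand-ins for trained networks.
W_gen = rng.normal(size=(LATENT_DIM, HORIZON * 2)) * 0.1
W_sem = rng.normal(size=(LATENT_DIM, SEMANTIC_DIM))

def generate_trajectory(z):
    """Map a latent code to a future trajectory of shape (HORIZON, 2)."""
    return (z @ W_gen).reshape(HORIZON, 2)

def semantic_embedding(z):
    """Project a latent code into the low-dimensional semantic space."""
    return z @ W_sem

def farthest_point_select(points, k):
    """Greedily pick k indices whose points are mutually far apart."""
    chosen = [0]
    dists = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dists))
        chosen.append(idx)
        dists = np.minimum(dists, np.linalg.norm(points - points[idx], axis=1))
    return chosen

# Oversample latent codes, then keep a diverse subset.
z_pool = rng.normal(size=(200, LATENT_DIM))
sem = np.stack([semantic_embedding(z) for z in z_pool])
keep = farthest_point_select(sem, k=5)
diverse_trajectories = [generate_trajectory(z_pool[i]) for i in keep]
print(len(diverse_trajectories), diverse_trajectories[0].shape)  # 5 (12, 2)
```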

Cited by 6 publications (8 citation statements)
References 42 publications
“…For example [21] defines prediction in this setting as trajectory forecasting, a class of imitation learning with non-interaction. Such approaches usually incorporate probabilistic latent variable priors that capture uncertainty as a set of non-interpretable random variables, or leverage Generative Adversarial Networks (GAN) [27]. However, their major drawback is the reliance on sequential sampling, which accumulates errors in each step.…”
Section: Related Work
Confidence: 99%
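The drawback noted in the statement above, error accumulating at each step of sequential sampling, can be shown with a small numerical experiment. The following is a minimal sketch under assumed toy dynamics: an autoregressive rollout that feeds each noisy one-step prediction back as input drifts further from the ground truth than a one-shot prediction with the same per-step noise. The dynamics and noise levels are illustrative assumptions, not taken from the cited works.

```python
# Minimal sketch (toy 1-D dynamics): why feeding predictions back into the
# model ("sequential sampling") lets small per-step errors compound.
import numpy as np

rng = np.random.default_rng(1)
HORIZON = 20
VELOCITY = 1.0       # assumed constant true velocity
STEP_NOISE = 0.05    # assumed per-step prediction error (std dev)

true_path = VELOCITY * np.arange(1, HORIZON + 1)

# Sequential rollout: each step starts from the previous *prediction*.
seq_pred = np.empty(HORIZON)
state = 0.0
for t in range(HORIZON):
    state = state + VELOCITY + rng.normal(0.0, STEP_NOISE)
    seq_pred[t] = state

# One-shot prediction: every step has independent noise around the truth.
oneshot_pred = true_path + rng.normal(0.0, STEP_NOISE, size=HORIZON)

print("final-step error, sequential:", abs(seq_pred[-1] - true_path[-1]))
print("final-step error, one-shot:  ", abs(oneshot_pred[-1] - true_path[-1]))
# The sequential error grows roughly like sqrt(HORIZON) * STEP_NOISE,
# while the one-shot error stays on the order of STEP_NOISE.
```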
“…Recent developments in learning-based methods of trajectory prediction such as RNNs [9], [10] allow for improved prediction in crowded environments, outperforming parametric methods such as SFM [11]. These methods have been applied to multimodal prediction by learning semantically meaningful latent representations in conditional variational autoencoders [12], [13] and GANs [14], or through clustering modal paths in output distributions [15]. However, these methods can still fail to outperform even simple baselines such as constant velocity models in many situations [16].…”
Section: arXiv:2006.12906v1 [cs.CV] 23 Jun 2020
Confidence: 99%
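The constant velocity baseline mentioned in the statement above is simple enough to state in a few lines. The sketch below is a minimal, assumed implementation: it extrapolates the last observed displacement over a fixed horizon; the horizon length and input format are illustrative choices, not taken from any of the cited works.

```python
# Minimal sketch of a constant velocity (CV) baseline predictor:
# extrapolate the last observed displacement over the whole horizon.
import numpy as np

def constant_velocity_predict(history, horizon=12):
    """history: (T, 2) array of observed (x, y) positions.
    Returns a (horizon, 2) array of predicted future positions."""
    history = np.asarray(history, dtype=float)
    velocity = history[-1] - history[-2]          # last observed displacement
    steps = np.arange(1, horizon + 1)[:, None]    # (horizon, 1)
    return history[-1] + steps * velocity

# Example: a vehicle moving roughly straight at 1 m per step.
obs = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.2], [3.0, 0.3]])
print(constant_velocity_predict(obs, horizon=3))
```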
“…This method, as well as a similar GAN-based approach proposed by [18], also makes use of overhead contextual scene inputs, which are often difficult to capture in autonomous driving systems. Prior work [2], [5], [14] using GANs for trajectory prediction has followed the assumption from GAN application to image synthesis that we cannot efficiently evaluate the output distribution, but can sample from it, requiring multiple iterations to identify the true multimodal distribution. However, our problem's output distribution is much lower-dimensional than image synthesis, and has been modelled previously by GMMs as in [12], [15], [19], allowing a distribution to be generated from a single iteration.…”
Section: arXiv:2006.12906v1 [cs.CV] 23 Jun 2020
Confidence: 99%
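The contrast drawn above, between estimating the output distribution from many GAN samples and emitting a closed-form mixture in one pass, is easy to make concrete. The following is a minimal, hypothetical sketch of a Gaussian mixture model (GMM) output over a single future position: one forward pass yields mixture weights, means, and variances, from which the whole multimodal distribution can be evaluated or sampled without repeated network calls. The two-mode parameters are invented for illustration.

```python
# Minimal sketch: a GMM over a future (x, y) position lets one forward pass
# describe a full multimodal distribution, instead of estimating it from
# many GAN samples. The parameters below are invented for illustration.
import numpy as np

# Pretend these came from a single forward pass of a prediction network:
weights = np.array([0.7, 0.3])                 # mode probabilities
means = np.array([[10.0, 0.0], [8.0, 3.0]])    # e.g. "go straight" vs "turn"
stds = np.array([[1.0, 0.5], [1.2, 0.8]])      # per-axis std dev (diagonal cov)

def gmm_log_likelihood(point):
    """Evaluate the mixture density at a query point (no sampling needed)."""
    point = np.asarray(point, dtype=float)
    comp = (-0.5 * np.sum(((point - means) / stds) ** 2, axis=1)
            - np.sum(np.log(stds), axis=1) - np.log(2 * np.pi))
    return np.log(np.sum(weights * np.exp(comp)))

def gmm_sample(rng, n):
    """Draw n samples directly from the closed-form mixture."""
    modes = rng.choice(len(weights), size=n, p=weights)
    return means[modes] + rng.normal(size=(n, 2)) * stds[modes]

rng = np.random.default_rng(2)
print(gmm_log_likelihood([10.0, 0.0]))   # density under the dominant mode
print(gmm_sample(rng, 3))                # a few samples, one pass, no GAN loop
```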
“…Predictions, however, are inherently uncertain, so it is desirable to represent uncertainty in predictions of possible future states and reason about this uncertainty while planning. This desire is motivating ongoing work in the behavior prediction community to go beyond single maximum a posteriori (MAP) prediction and develop methods for generating probabilistic predictions [1]-[4]. In the most general sense, this involves learning joint distributions for the future states of all the agents conditioned on their past trajectories and other context-specific variables (e.g.…
Section: Introduction
Confidence: 99%