2020
DOI: 10.1007/978-3-030-58589-1_33

Dynamic and Static Context-Aware LSTM for Multi-agent Motion Prediction

Abstract: Multi-agent motion prediction is challenging because it aims to foresee the future trajectories of multiple agents (e.g. pedestrians) simultaneously in a complicated scene. Existing work addressed this challenge by either learning social spatial interactions represented by the positions of a group of pedestrians, while ignoring their temporal coherence (i.e. dependencies between different long trajectories), or by understanding the complicated scene layout (e.g. scene segmentation) to ensure safe navigation. H…

Cited by 44 publications (17 citation statements)
References 36 publications
“…[57], [58] propose to feed the relative position and relative velocity of the k nearest neighbours directly into a Multi-Layer Perceptron (MLP) to obtain the interaction vector. Many works [63]-[77] propose interaction module designs based on attention mechanisms [78], [79] to identify the neighbours that affect the trajectory of the person of interest. The attention weights are either learned or handcrafted based on domain knowledge (e.g., Euclidean distance).…”
Section: Related Work
confidence: 99%
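The k-nearest-neighbour MLP interaction described in this statement can be sketched as follows. This is a minimal illustration: the two-layer structure, the ReLU activation, and the weight shapes are assumptions for the sketch, not the exact architecture used in [57], [58].

```python
import numpy as np

def interaction_vector(ego, neighbours, k, W1, b1, W2, b2):
    """Hypothetical sketch: feed the relative position and velocity of the
    k nearest neighbours through a small MLP to get an interaction vector.
    Each row of `neighbours` (and `ego`) is a state (x, y, vx, vy)."""
    rel = neighbours - ego                       # relative states
    dists = np.linalg.norm(rel[:, :2], axis=1)   # distance on positions only
    nearest = rel[np.argsort(dists)[:k]]         # keep the k nearest
    x = nearest.reshape(-1)                      # flatten to a fixed-size input
    h = np.maximum(0.0, W1 @ x + b1)             # hidden layer with ReLU
    return W2 @ h + b2                           # interaction vector
```

Fixing k gives the MLP a fixed input dimension (k * 4 here), which is what makes the direct concatenation of neighbour states workable.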
“…It uses a special Structural RNN (S-RNN) to calculate the weights of the spatial-temporal graph data, taking the problem content as the nodes and the time-series data as the edge values. (iv) DSCMP [28] designs a queue mechanism to explicitly memorize and learn the correlation among long trajectories.…”
Section: Performance Evaluation
confidence: 99%
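The queue mechanism attributed to DSCMP [28] can be illustrated with a toy buffer of past hidden states. This is only a schematic: the class name, the fixed-length deque, and the mean-pooled summary are assumptions for illustration, not DSCMP's actual queue design.

```python
from collections import deque
import numpy as np

class StateQueue:
    """Toy sketch: remember the last `maxlen` hidden states so that
    correlations over long trajectories can be summarized later."""
    def __init__(self, maxlen, dim):
        self.q = deque(maxlen=maxlen)  # old states are evicted automatically
        self.dim = dim

    def push(self, h):
        self.q.append(np.asarray(h, dtype=float))

    def context(self):
        # Mean over remembered states as a simple long-horizon summary;
        # a learned module would attend over the queue instead.
        if not self.q:
            return np.zeros(self.dim)
        return np.mean(np.stack(self.q), axis=0)
```

The point of such a queue is that the recurrent cell no longer has to compress an entire long trajectory into one hidden state: older steps stay explicitly accessible.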
“…Other LSTM networks focus on attention mechanisms in the interaction module to capture the relative importance of each person in the scene. In some of these works, the attention weights are learned from data or handcrafted based on domain knowledge [176], [190], [191], [192]. Another relevant work is STGAT by Huang et al. [191], which uses graph neural networks (GNNs) instead of a pooling module.…”
Section: A Long Short-term Memory Network
confidence: 99%
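A handcrafted attention weighting of the kind mentioned above (domain knowledge rather than learned parameters) can be sketched with a softmax over negative Euclidean distances, so that closer neighbours receive larger weights. The function name and the temperature parameter are illustrative assumptions, not taken from any of the cited works.

```python
import numpy as np

def distance_attention(ego_pos, neighbour_pos, temperature=1.0):
    """Handcrafted attention sketch: weight each neighbour by a softmax
    over negative Euclidean distances, so nearer agents matter more."""
    d = np.linalg.norm(neighbour_pos - ego_pos, axis=1)  # distances to ego
    logits = -d / temperature                            # closer => larger logit
    w = np.exp(logits - logits.max())                    # stable softmax
    return w / w.sum()                                   # weights sum to 1
```

A learned variant would replace the distance-based logits with scores produced by a trainable network, which is the distinction the survey text is drawing.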