“…Game-theoretic planning: Traditionally, multi-agent planning and game-theoretic approaches explicitly model the policies or internal states of multiple agents, usually by generalizing the Markov decision process (MDP) to multiple decision makers [5,33]. These frameworks facilitate reasoning about collaboration strategies, but become intractable due to state-space explosion, except when interactions are known to be sparse [24] or hierarchically decomposable [11]. Multi-agent forecasting: Data-driven approaches have been applied to forecast complex interactions between multiple pedestrians [1,3,10,14,21], vehicles [6,19,26], and athletes [9,18,20,32,34,35].…”