2020
DOI: 10.1109/access.2020.3032780

A Mental Simulation Approach for Learning Neural-Network Predictive Control (in Self-Driving Cars)

Abstract: This paper presents a novel approach to learning predictive motor control via "mental simulations". The method, inspired by learning via mental imagery in natural cognition, develops in two phases: first, the learning of predictive models based on data recorded in the interaction with the environment; then, at a deferred time, the synthesis of inverse models via offline episodic simulations. Parallelism with the human-engineered control-theoretic workflow (mathematical modeling of the direct dynamics followed by opti…

Cited by 21 publications (17 citation statements) · References 46 publications
“…For practical reasons, we reuse the self-driving agent of the Dreams4Cars project. In this work, we describe the novel interaction mechanics and the enabling elements (Section III), but we do not give a comprehensive description of the rest of the agent, which was published in [3] (agent architecture), [4] (offline learning via mental simulations) and [5] (learning cautious behaviors).…”
Section: A What This Paper Is (And Is Not) About
confidence: 99%
“…The dynamics of a vehicle are in part stochastic because of external disturbances: an action u = {j(t), r(t)} may generate a family of trajectories γ. The stochastic vehicle response {j(t), r(t)} → γ is specified by a probabilistic motion model (in our case, probabilistic motion models were learned with a technique similar to [4]). We hence begin with a mapping u → γ from a generic action u = {j(t), r(t)} to the distribution of generated trajectories γ.…”
Section: B Action Priming
confidence: 99%
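The stochastic mapping u → γ quoted above — one action u = {j(t), r(t)} generating a family of trajectories γ under external disturbances — can be sketched by Monte Carlo rollout of a simple noisy kinematic model. Everything here (the point-mass dynamics, additive Gaussian disturbances, initial speed, and all parameter names) is an illustrative assumption, not the learned probabilistic motion model of the paper:

```python
import numpy as np

def sample_trajectories(j, r, n_samples=100, dt=0.1, noise_std=0.05, rng=None):
    """Sample a family of trajectories gamma for one action u = {j(t), r(t)}.

    j, r: arrays of longitudinal jerk and steering-rate commands over time.
    External disturbances are modeled as additive Gaussian noise on the
    acceleration and heading updates, so repeating the same action yields
    a distribution of trajectories rather than a single one.
    """
    rng = np.random.default_rng(rng)
    T = len(j)
    trajs = np.zeros((n_samples, T, 2))  # (x, y) positions per rollout
    for k in range(n_samples):
        x = y = 0.0
        v, a, heading = 10.0, 0.0, 0.0  # assumed initial speed, accel, heading
        for t in range(T):
            a += j[t] * dt + rng.normal(0.0, noise_std)        # jerk -> accel
            heading += r[t] * dt + rng.normal(0.0, noise_std)  # steering rate -> heading
            v = max(v + a * dt, 0.0)                           # no reversing
            x += v * np.cos(heading) * dt
            y += v * np.sin(heading) * dt
            trajs[k, t] = (x, y)
    return trajs  # shape (n_samples, T, 2): samples from the distribution of gamma
```

Statistics over the returned sample set (e.g., the per-timestep mean and covariance of position) then stand in for the trajectory distribution that the quoted passage feeds into action priming.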
“…In our work, the driver agent is [2], but other realizations may also work, and we release an open access implementation of this work. The agent can drive [3], i.e., it is capable of high-level motor planning and low-level control. Specifically, it predicts the other road users' (pedestrians) trajectories with a mirroring mechanism (see also [4,Section IV.A and Section V.A]) and maneuvers accordingly to avoid collisions.…”
Section: B Driver Agent
confidence: 99%
“…1) when the RB_common and RB_rare files are ready to be read, i.e. the DQL middleware is not writing on the files, the DQL core loads the buffers and creates a batch of training data by randomly sampling the two buffers, taking only 5% of the data from the RB_rare; 2) the DQL core updates the weights of Q and Q̂ in equation (4) using, respectively, the ADAM optimization algorithm and the Polyak averaging (5), and it stores them in the network file; 3) the DQL core updates the value of ε using the rule in equation (3), and it stores it in the network file; 4) the training stops if it reaches the maximum number of epochs, otherwise it restarts from step 1). The simulation process produces the datasets (the pseudo-code of the algorithm is presented in the supplementary materials Alg.…”
Section: A The Training Procedures
confidence: 99%
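The three mechanics in the quoted procedure — mixing the two replay buffers with only 5% drawn from the rare one, the Polyak-averaged target-network update, and the ε decay — can be sketched as below. The buffer layout, τ value, and the exponential decay rule are assumptions for illustration; the paper's actual equations (3)–(5) are not reproduced here:

```python
import numpy as np

def sample_batch(rb_common, rb_rare, batch_size=64, rare_frac=0.05, rng=None):
    """Build a training batch, taking only ~5% of it from the rare buffer."""
    rng = np.random.default_rng(rng)
    n_rare = max(1, int(batch_size * rare_frac))
    idx_c = rng.integers(0, len(rb_common), size=batch_size - n_rare)
    idx_r = rng.integers(0, len(rb_rare), size=n_rare)
    return [rb_common[i] for i in idx_c] + [rb_rare[i] for i in idx_r]

def polyak_update(target_w, online_w, tau=0.005):
    """Soft target update: w_target <- tau * w_online + (1 - tau) * w_target."""
    return {k: tau * online_w[k] + (1.0 - tau) * target_w[k] for k in online_w}

def decay_epsilon(eps, eps_min=0.05, rate=0.999):
    """One plausible epsilon-greedy decay rule: exponential with a floor."""
    return max(eps_min, eps * rate)
```

Each training iteration would then call `sample_batch`, take one ADAM step on the online network Q (not shown), apply `polyak_update` to the target network, and apply `decay_epsilon` before checking the epoch limit.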
“…Most of these studies are based on typical mathematical and control modeling algorithms to ensure smooth car-following such that an autonomous vehicle, defined as the follower, keeps following another vehicle, defined as the leader, while maintaining safety distances [9]-[11]. Recently, a few studies have promoted the use of Artificial Intelligence in designing car-following models [12], [13]. Most of them resorted to using Reinforcement Learning (RL) methods to determine navigation decisions for the follower vehicle and hence design their car-following models based on numerical inputs of the vehicle dynamics, e.g., the lateral position, the speed, and the yaw angle.…”
Section: Introduction
confidence: 99%