Capturing Car-Following Behaviors by Deep Learning (2018)
DOI: 10.1109/tits.2017.2706963

Cited by 252 publications (142 citation statements); References 55 publications
“…Chong et al (30) illustrated that it is possible to predict acceleration accurately using neural networks with only one hidden layer, and Khodayari et al (31) incorporated instantaneous reaction time (RT) delay. In contrast to these conventional neural network-based models, recent studies have begun to model car-following behavior with recurrent neural networks (RNN) (32), and with more than one hidden layer, i.e., deep neural networks (8).…”
Section: Data-driven Car-following Models (mentioning)
confidence: 99%
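For illustration, here is a minimal sketch in PyTorch (my own example, not from the paper) of the kind of one-hidden-layer network described above, mapping a hypothetical car-following state of gap, relative speed, and follower speed to a predicted acceleration:

import torch
import torch.nn as nn

class OneHiddenLayerCF(nn.Module):
    # Single-hidden-layer network: car-following state -> predicted acceleration.
    def __init__(self, n_inputs=3, n_hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_hidden),
            nn.Tanh(),
            nn.Linear(n_hidden, 1),
        )

    def forward(self, state):
        return self.net(state)

# Hypothetical state: [gap (m), relative speed (m/s), follower speed (m/s)]
model = OneHiddenLayerCF()
state = torch.tensor([[25.0, -1.2, 14.0]])
predicted_accel = model(state)  # in practice trained with MSE against observed acceleration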
“…The critic network with weights θ^Q approximates the action-value function Q(s, a|θ^Q). The actor network with weights θ^µ explicitly represents the agent's current policy µ(s|θ^µ) for the Q-function, which maps a state of the environment s to an action a. To enable stable and robust learning, DDPG deploys experience replay and target networks, as in DQN:…”
Section: Deep Deterministic Policy Gradient (mentioning)
confidence: 99%
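As a rough illustration of the DDPG components named in this excerpt, the following PyTorch sketch (assumed dimensions and names, not from the cited work) defines a critic Q(s, a|θ^Q), a deterministic actor µ(s|θ^µ), target copies of both, and an experience replay buffer:

import copy
from collections import deque

import torch
import torch.nn as nn

class Critic(nn.Module):
    # Approximates the action-value function Q(s, a | theta^Q).
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

class Actor(nn.Module):
    # Explicitly represents the deterministic policy mu(s | theta^mu): state -> action.
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # bounded continuous action
        )

    def forward(self, s):
        return self.net(s)

# Experience replay and slowly-updated target copies, as in DQN, stabilize learning.
replay_buffer = deque(maxlen=100_000)
actor, critic = Actor(3, 1), Critic(3, 1)
target_actor, target_critic = copy.deepcopy(actor), copy.deepcopy(critic)

def soft_update(target, online, tau=0.005):
    # theta_target <- tau * theta_online + (1 - tau) * theta_target
    for t_p, p in zip(target.parameters(), online.parameters()):
        t_p.data.mul_(1.0 - tau).add_(tau * p.data)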
“…RNNs have been widely used to capture temporal dependencies in sequential data [35], [36]. Representative models include GRUs [37] and long short-term memory (LSTM) [38], and they have been applied successfully to various tasks such as video representation [39], image captioning [40], and car-following modeling [41]. LSTM and GRUs, which typically use fully-connected layers, do not maintain spatial information and require many network parameters.…”
Section: B Recurrent Models (mentioning)
confidence: 99%
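A minimal PyTorch sketch (illustrative only, with arbitrary sizes) of how such a recurrent model consumes sequential data: a GRU reads a batch of feature sequences, and its final hidden state summarizes the temporal dependency through fully-connected gate transformations:

import torch
import torch.nn as nn

# A GRU with hidden size 16 reading sequences of 3-dimensional feature vectors.
gru = nn.GRU(input_size=3, hidden_size=16, batch_first=True)

sequence = torch.randn(4, 10, 3)   # 4 sequences, 10 time steps, 3 features each
outputs, h_n = gru(sequence)       # outputs: (4, 10, 16); h_n (final hidden state): (1, 4, 16)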
“…To solve this problem, various modifications have been made to RNNs; the most effective approach is to introduce gating mechanisms such as the long short-term memory (LSTM) [18,19] and the gated recurrent unit (GRU) [20] networks. Building on the previously described studies, Wang et al [21] proposed using a GRU to model CF behaviour and embed the driver's memory effect in the model; it takes the speed of the following car, the relative speed of the two cars, and the distance between the two cars observed over the last several time intervals as inputs, and the estimated speed of the following car at the next time point as the output. The test results showed that the proposed model achieves higher simulation accuracy than existing CF models and provides a new concept for the study of traffic flow theory and simulation.…”
Section: Introduction (mentioning)
confidence: 99%
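The following PyTorch sketch (an assumed architecture with illustrative layer sizes, not the authors' code) shows a GRU car-following model of the kind described above: the input sequence carries follower speed, relative speed, and gap over the last several intervals, and the output is the estimated follower speed at the next time point:

import torch
import torch.nn as nn

class GRUCarFollowing(nn.Module):
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # estimated speed of the following car at the next step

    def forward(self, history):
        # history: (batch, time_steps, 3) = [follower speed, relative speed, gap] per step
        _, h_n = self.gru(history)
        return self.head(h_n[-1])

model = GRUCarFollowing()
history = torch.randn(8, 5, 3)  # 8 samples, each covering the last 5 time intervals
next_speed = model(history)     # shape: (8, 1)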