2019 IEEE Intelligent Transportation Systems Conference (ITSC)
DOI: 10.1109/itsc.2019.8917085

Machine learning method to ensure robust decision-making of AVs

Abstract: Replacing the human driver to perform the Dynamic Driving Task (DDT) [1] will require perception, complex analysis, and assessment of the traffic situation. The path leading to the successful deployment of fully Autonomous Vehicles (AVs) depends on the resolution of many challenges. Both the safety and the security aspects of AVs constitute the core of regulatory compliance and technical research. The Autonomous Driving System (ADS) should be designed to ensure a safe manoeuvre and a stable behaviour despite the techn…

Cited by 7 publications (5 citation statements)
References: 17 publications
“…Before significant advances in deep learning technology, traditional methods dominated research in the field of robotic decision control. These methods, including multilayer perceptrons, support vector machines, Bayesian networks, and AdaBoost, were widely used to solve relatively small-scale decision-control problems such as simple autonomous navigation [35], vehicle lane-change decisions [36], and collision risk assessment [37]. In recent years, with significant advances in big data, deep learning models, and computational power, learning-based methods have attracted widespread research interest in the domain of robot decision control.…”
Section: B. Robotic Decision (mentioning)
confidence: 99%
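As a concrete illustration of the kind of classic-ML decision problem described in this citation statement, the sketch below trains an AdaBoost classifier on a toy lane-change dataset. It is not taken from the cited works; the features (lead-vehicle gap, relative speed, target-lane gap) and the labelling rule are hypothetical.

```python
# Minimal sketch (not from the cited works): a classic-ML lane-change
# decision classifier of the kind referenced above, on hypothetical features.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features per sample: [gap to lead vehicle (m),
# relative speed to lead (m/s), gap in target lane (m)]
X = rng.uniform([5.0, -10.0, 5.0], [120.0, 10.0, 120.0], size=(2000, 3))

# Toy labelling rule: change lane (1) when the lead gap is short, the ego
# vehicle is closing in, and the target-lane gap is large enough.
y = ((X[:, 0] < 40.0) & (X[:, 1] < 0.0) & (X[:, 2] > 60.0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"hold-out accuracy: {clf.score(X_te, y_te):.3f}")
```

An SVM or a small multilayer perceptron could be swapped in through the same scikit-learn interface, which is part of why these methods were attractive for small-scale decision problems.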
“…where $\Delta_{l,1,\min}$, $\Delta_{l,1,\max}$, $\Delta_{l,2,\min}$, $\Delta_{l,2,\max}$ are the boundaries of $\Lambda_{l,1}$, $\Lambda_{l,2}$. The formulations (6)-(7) show that $y_{ref}$, $v_{ref}$ must lie inside a limited neighbourhood of the safe trajectory $y_{i+1,s}$, $v_{x,i+1,s}$. In practice, it is suggested to select $|\Delta_{l,i,\max}| = |\Delta_{l,i,\min}| = \Delta_{l,i,m}$, $i \in \{1, 2\}$, which leads to symmetric domains.…”
Section: Robust Control Design of the Autonomous Overtaking Strategy (mentioning)
confidence: 99%
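A minimal sketch of the symmetric-bound admissibility check described in this passage, assuming illustrative names (delta_y_m and delta_v_m stand in for the half-widths $\Delta_{l,1,m}$ and $\Delta_{l,2,m}$); this is not the authors' implementation, only the constraint written out as code.

```python
# Sketch of the symmetric-neighbourhood check quoted above (names illustrative).
def reference_is_admissible(y_ref: float, v_ref: float,
                            y_safe: float, v_safe: float,
                            delta_y_m: float, delta_v_m: float) -> bool:
    """Return True if (y_ref, v_ref) lies inside the symmetric neighbourhood
    of the safe trajectory point (y_safe, v_safe)."""
    return abs(y_ref - y_safe) <= delta_y_m and abs(v_ref - v_safe) <= delta_v_m

# Example: lateral reference 0.3 m from the safe path and 1.2 m/s slower,
# with half-widths of 0.5 m and 2.0 m/s.
print(reference_is_admissible(3.8, 18.8, 3.5, 20.0, 0.5, 2.0))  # True
```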
“…A reinforcement-learning-based overtaking control strategy is proposed in [5], [6]. In [7], a Q-learning strategy is used in the design of driving algorithms for multi-lane environments. An analysis method for the robust properties of machine-learning-based overtaking decision strategies is found in [7].…”
Section: Introduction (mentioning)
confidence: 99%
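To make the Q-learning approach mentioned here concrete, the following toy sketch runs tabular Q-learning on a hypothetical three-lane lane-choice task. It is only an illustration of the technique named in [7]; the state space, reward, and environment dynamics are invented for the example.

```python
# Toy tabular Q-learning for a simplified multi-lane lane-choice task
# (hypothetical environment; not the method of the cited work).
import numpy as np

N_LANES, ACTIONS = 3, (-1, 0, +1)          # move left, keep lane, move right
rng = np.random.default_rng(1)
Q = np.zeros((N_LANES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1         # learning rate, discount, exploration

def step(lane: int, action_idx: int) -> tuple[int, float]:
    """Hypothetical dynamics: lane 1 (middle) is free-flowing, the others are slow."""
    new_lane = int(np.clip(lane + ACTIONS[action_idx], 0, N_LANES - 1))
    reward = 1.0 if new_lane == 1 else -0.2
    return new_lane, reward

lane = 0
for _ in range(5000):
    # epsilon-greedy action selection
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[lane]))
    new_lane, r = step(lane, a)
    # standard Q-learning update
    Q[lane, a] += alpha * (r + gamma * np.max(Q[new_lane]) - Q[lane, a])
    lane = new_lane

print("greedy action per lane:", [ACTIONS[int(np.argmax(Q[s]))] for s in range(N_LANES)])
```

With enough iterations the greedy policy steers from the outer lanes toward the rewarded middle lane, which is the basic behaviour a Q-learning lane-decision scheme is meant to learn.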
“…In addition, machine learning algorithms are used to realize autonomous driving. Tami et al. [32], for example, use this approach to make lane-change decisions. However, the presented methods mostly address short-term planning horizons, which are not sufficient for energy-efficient driving and therefore do not include eco-driving.…”
Section: Introduction (mentioning)
confidence: 99%