2020 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/iv47402.2020.9304819
Sense–Assess–eXplain (SAX): Building Trust in Autonomous Vehicles in Challenging Real-World Driving Scenarios

Abstract: This paper discusses ongoing work in demonstrating research in mobile autonomy in challenging driving scenarios. In our approach, we address fundamental technical issues to overcome critical barriers to assurance and regulation for large-scale deployments of autonomous systems. To this end, we present how we build robots that (1) can robustly sense and interpret their environment using traditional as well as unconventional sensors; (2) can assess their own capabilities; and (3), vitally in the purpose of assurance…

Cited by 13 publications (8 citation statements); references 38 publications.
“…The Sense–Assess–eXplain (SAX) project in the Oxford Robotics Institute (ORI) is currently collecting driving data that includes proprioceptive information from the CAN bus of a Jaguar Land Rover (JLR) ego vehicle [188]. The relevant CAN-bus signals include wheel angle, yaw, acceleration, and braking, among others.…”
Section: A. Standards and Regulations
confidence: 99%
“…The use of scene graphs allows for an explainable intermediate representation of driving scenes. As a further step, interpretability and intelligibility need to be considered throughout the learning and prediction process in order to enhance transparency and accountability [16]. Hence, we apply interpretable (tree-based) models with high intelligibility (natural-language explanations) to risk prediction and classification tasks.…”
Section: B. Explanations in Risk Assessment
confidence: 99%
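The pairing this statement describes, a tree-based model whose predictions come with natural-language explanations, can be illustrated with a small sketch: a decision tree classifies a scene as risky or safe, and the splits along its decision path are verbalised as a sentence. The feature names, thresholds, and training data below are invented for illustration and are not the cited paper's actual model.

```python
# Sketch: an interpretable tree-based risk classifier whose decision
# path is rendered as a natural-language explanation.
# Feature names and training data are invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["time_to_collision_s", "ego_speed_mps", "pedestrian_nearby"]
X = np.array([[6.0, 10.0, 0], [1.5, 15.0, 1], [4.0, 8.0, 1],
              [0.8, 20.0, 0], [7.0, 12.0, 0], [2.0, 18.0, 1]])
y = np.array([0, 1, 0, 1, 0, 1])  # 0 = safe, 1 = risky

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def explain(x):
    """Walk the decision path for one scene and verbalise each split."""
    node_ids = clf.decision_path(x.reshape(1, -1)).indices
    feat, thresh = clf.tree_.feature, clf.tree_.threshold
    clauses = []
    for node in node_ids:
        if feat[node] < 0:           # leaf node: no split to describe
            continue
        name = FEATURES[feat[node]]
        op = "<=" if x[feat[node]] <= thresh[node] else ">"
        clauses.append(f"{name} {op} {thresh[node]:.2f}")
    label = "risky" if clf.predict(x.reshape(1, -1))[0] == 1 else "safe"
    return f"Classified {label} because " + " and ".join(clauses) + "."

print(explain(np.array([1.2, 17.0, 1])))
```

Because every split tests a named, physically meaningful feature, the generated sentence is a faithful trace of the model's actual decision process rather than a post-hoc rationalisation.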
“…Further, to the extent that our predictions are structured around interpreting observed trajectories in terms of high-level maneuvers, the goal-recognition process lends itself to intuitive interpretation for the purposes of system analysis and debugging, at the level of detail suggested in Figure 2. As we work towards making our autonomous systems more trustworthy [21], these notions of interpretation and the ability to justify (explain) the system's decisions are key [22].…”
Section: Introduction
confidence: 99%
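The goal-recognition process this statement describes can be sketched as Bayesian inference: maintain a posterior over candidate goals and update it with the likelihood of each observed maneuver under each goal. The goal set, maneuver alphabet, and likelihood table below are assumptions made for illustration; the cited work's actual formulation may differ.

```python
# Sketch: Bayesian goal recognition from a sequence of observed
# high-level maneuvers. Goals, maneuvers, and the likelihood table
# are illustrative assumptions, not the cited paper's actual model.
GOALS = ["turn_left", "go_straight"]

# P(maneuver | goal): how likely each maneuver is under each goal.
LIKELIHOOD = {
    "turn_left":   {"slow_down": 0.6, "keep_speed": 0.1, "signal_left": 0.3},
    "go_straight": {"slow_down": 0.1, "keep_speed": 0.8, "signal_left": 0.1},
}

def posterior(observed, prior=None):
    """Return P(goal | observed maneuvers) via repeated Bayes updates."""
    belief = dict(prior) if prior else {g: 1.0 / len(GOALS) for g in GOALS}
    for m in observed:
        for g in GOALS:
            belief[g] *= LIKELIHOOD[g][m]     # multiply in the likelihood
        z = sum(belief.values())              # normalise after each update
        belief = {g: p / z for g, p in belief.items()}
    return belief

# An observed trajectory, already segmented into maneuvers:
obs = ["slow_down", "signal_left"]
bel = posterior(obs)
best = max(bel, key=bel.get)
print(f"Most likely goal: {best} (p={bel[best]:.2f}), given {obs}")
```

Because the inferred posterior is expressed over named goals and updated by named maneuvers, each prediction can be justified in exactly the intuitive terms the quoted passage calls for, e.g. "turn_left became most likely after the vehicle slowed down and signalled left".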