Robotics: Science and Systems XII
DOI: 10.15607/rss.2016.xii.029

Planning for Autonomous Cars that Leverage Effects on Human Actions

Abstract: Traditionally, autonomous cars make predictions about other drivers' future trajectories, and plan to stay out of their way. This tends to result in defensive and opaque behaviors. Our key insight is that an autonomous car's actions will actually affect what other cars will do in response, whether the car is aware of it or not. Our thesis is that we can leverage these responses to plan more efficient and communicative behaviors. We model the interaction between an autonomous car and a human driver as a dynamic…


Cited by 371 publications (347 citation statements)
References 14 publications
“…It is a common pattern to model other vehicles' behavior as expected utility maximizing, i.e., an agent is expected to execute the most beneficial controls (87). Therefore, a reward or utility function needs to be known or learned.…”
Section: Game-theoretic Approaches
confidence: 99%
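The excerpt above describes predicting another driver's action as the control that maximizes their expected utility. A minimal sketch of that pattern, where the discrete control set, the toy dynamics, and the utility function are all illustrative assumptions for this example, not the model from the cited work:

```python
# Illustrative: the other agent is assumed to pick the utility-maximizing
# control. Controls, dynamics, and utility here are toy assumptions.
CONTROLS = (-1.0, 0.0, 1.0)  # brake, coast, accelerate

def utility(speed, control):
    """Toy reward: prefer a target speed of 1.0, penalize harsh control."""
    next_speed = speed + 0.1 * control  # simple forward dynamics
    return -(next_speed - 1.0) ** 2 - 0.05 * control ** 2

def predicted_control(speed):
    """Predict the agent's action as the most beneficial control."""
    return max(CONTROLS, key=lambda u: utility(speed, u))
```

As the excerpt notes, the prediction is only as good as the utility function, which in practice must be specified by hand or learned (e.g., via IRL, as in the next excerpt).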
“…Approach Overview. Inverse Reinforcement Learning (IRL) [15,19,23] enables us to learn R_H through demonstrated trajectories. However, IRL requires the human to show demonstrations of the optimal sequence of actions.…”
Section: Problem Statement
confidence: 99%
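The excerpt above refers to learning the human's reward R_H from demonstrated trajectories via IRL. A minimal sketch of one standard formulation (a linear reward w·φ fit by maximum-entropy feature matching); the feature choices, trajectory format, and sampling-based model expectation are assumptions for illustration, not the cited papers' implementation:

```python
import numpy as np

def features(traj):
    """Toy trajectory features: mean speed and mean |acceleration|
    of a 1-D position sequence (hypothetical feature choice)."""
    traj = np.asarray(traj, dtype=float)
    speed = np.diff(traj)
    accel = np.diff(speed)
    return np.array([speed.mean(), np.abs(accel).mean() if accel.size else 0.0])

def maxent_irl_step(w, demo_trajs, sampled_trajs, lr=0.1):
    """One gradient step of max-entropy IRL with linear reward w . phi:
    move w toward demonstrated feature counts and away from the features
    the current reward makes likely (approximated over sampled_trajs)."""
    f_demo = np.mean([features(t) for t in demo_trajs], axis=0)
    phis = np.array([features(t) for t in sampled_trajs])
    logits = phis @ w
    p = np.exp(logits - logits.max())  # softmax over sampled trajectories
    p /= p.sum()
    f_model = p @ phis
    return w + lr * (f_demo - f_model)
```

Iterating this step increases the demonstrated trajectories' likelihood under the learned reward; the excerpt's caveat is that the demonstrations themselves must be (near-)optimal for the recovered R_H to be meaningful.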
“…Future experiments will focus on more realistic driving scenarios and learning risk preferences of human UAV pilots in cluttered environments. Finally, we plan on studying the game theoretic IRL setting (e.g., [43,48]), where multiple risk-sensitive agents interact.…”
Section: Results
confidence: 99%
“…In order to realize this vision, robots must be able to (1) accurately predict the actions of humans in their environment, (2) quickly learn the preferences of human agents in their proximity and act accordingly, and (3) learn how to accomplish new tasks from human demonstrations. Inverse Reinforcement Learning (IRL) [41,32,2,29,38,50,16] is a powerful and flexible framework for tackling these challenges and has been previously used for tasks such as modeling and mimicking human driver behavior [1,28,43], pedestrian trajectory prediction [51,31], and legged robot locomotion [52,27,35]. The underlying assumption behind IRL is that humans act optimally with respect to an (unknown) cost function.…”
Section: Introduction
confidence: 99%