2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros40897.2019.8967676
Belief Space Metareasoning for Exception Recovery

Cited by 20 publications (9 citation statements)
References 8 publications
“…Other planning domains have also adopted meta-reasoning, including algorithm selection in sorting [24] and belief space meta-reasoning in autonomous driving applications [25].…”
Section: Related Work
confidence: 99%
“…An SSP is a general-purpose model for sequential decision making in stochastic domains with an objective of finding the least-cost path from a start state to a goal state. This model has been used in a wide range of applications, including exception recovery [37], electric vehicle charging [32], search and rescue [27], and autonomous navigation [45].…”
Section: Domain Model
confidence: 99%
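Since the quote above defines an SSP only in words, a small sketch may help make it concrete. The following Python snippet solves a toy SSP by value iteration; the domain, state names, transition probabilities, and costs are illustrative assumptions, not taken from the cited paper or from references [37], [32], [27], or [45].

```python
# Minimal sketch of a Stochastic Shortest Path (SSP) problem solved by
# value iteration. The toy domain below is an illustrative assumption,
# not taken from the cited paper.

S = ["s0", "s1", "goal"]          # states, including one goal state
A = ["a0", "a1"]                  # actions
T = {                             # T[(s, a)] -> list of (next_state, prob)
    ("s0", "a0"): [("s1", 0.8), ("s0", 0.2)],
    ("s0", "a1"): [("goal", 0.1), ("s0", 0.9)],
    ("s1", "a0"): [("goal", 0.9), ("s1", 0.1)],
    ("s1", "a1"): [("s0", 1.0)],
}
C = {("s0", "a0"): 1.0, ("s0", "a1"): 3.0,   # cost of taking a in s
     ("s1", "a0"): 1.0, ("s1", "a1"): 0.5}
GOAL = {"goal"}

def value_iteration(eps=1e-6):
    """Compute expected least cost-to-go V(s) and a greedy policy."""
    V = {s: 0.0 for s in S}
    while True:
        delta = 0.0
        for s in S:
            if s in GOAL:
                continue  # goal states are absorbing with zero cost
            q = [C[(s, a)] + sum(p * V[s2] for s2, p in T[(s, a)])
                 for a in A]
            best = min(q)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    policy = {s: min(A, key=lambda a: C[(s, a)] +
                     sum(p * V[s2] for s2, p in T[(s, a)]))
              for s in S if s not in GOAL}
    return V, policy

V, pi = value_iteration()
print(V)   # expected cost-to-go from each state
print(pi)  # least-cost action for each non-goal state
```

Here the greedy policy simply minimizes expected cost-to-go, which matches the least-cost-path objective described in the quote.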
“…As noted earlier, we focus on the CAS illustrated in Figure 1, which represents a class of CAS with four distinct levels of autonomy and four feedback signals. Recent work on autonomous vehicles [28] and autonomous mobile robots [22] suggests that this class of CAS represents a wide range of autonomous systems. In Figure 1, the policy π produces an action a and a level l for every state s. l dictates the manner in which the system carries out the action a, and the autonomy profile κ restricts the levels that π can return.…”
Section: Sample CAS
confidence: 99%
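To make the π/κ interaction in that quote concrete, here is a minimal, hypothetical Python sketch of a CAS policy that returns an action together with an autonomy level, with the profile κ constraining which levels the policy may return. The state names, levels, and fallback rule are assumptions for illustration only.

```python
# Sketch of a competence-aware system (CAS) policy as described in the
# quote: pi maps each state s to an action a and a level of autonomy l,
# and the autonomy profile kappa restricts the levels pi may return.
# All names and values here are illustrative assumptions.

LEVELS = [0, 1, 2, 3]  # four distinct levels of autonomy

kappa = {  # autonomy profile: levels permitted in each state
    "intersection": [0, 1],        # low autonomy near intersections
    "open_road":    [0, 1, 2, 3],  # full autonomy permitted
}

preferred = {  # the level pi would ideally use in each state
    "intersection": 3,
    "open_road":    2,
}

def pi(state):
    """Return (action, level) with the level restricted by kappa[state]."""
    action = "proceed"  # placeholder action choice
    allowed = kappa[state]
    # Use the preferred level if the profile permits it; otherwise fall
    # back to the highest permitted level below it.
    level = preferred[state] if preferred[state] in allowed else max(
        l for l in allowed if l < preferred[state])
    return action, level

print(pi("intersection"))  # ('proceed', 1): kappa caps the level
print(pi("open_road"))     # ('proceed', 2): preferred level is allowed
```

The clamp to κ mirrors the quoted description: π proposes a level, and the autonomy profile restricts what it can actually return.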
“…Hence, an initial level of competence could be determined during testing and evaluation, but adjustments must be made when the system is deployed. Even when developers aim to err on the side of caution and define a lower level of autonomy as the default, it is also possible to unintentionally infer from initial testing that the system is more competent than it really is [18,22]. Therefore, developing mechanisms to explicitly represent, reason about, and adjust the level of autonomy is an important challenge in artificial intelligence.…”
Section: Introduction
confidence: 99%