2012 · DOI: 10.1111/j.1467-8640.2012.00445.x
Goal‐driven Autonomy for Responding to Unexpected Events in Strategy Simulations

Abstract: To operate autonomously in complex environments, an agent must monitor its environment and determine how to respond to new situations. To be considered intelligent, an agent should select actions in pursuit of its goals, and adapt accordingly when its goals need revision. However, most agents assume that their goals are given to them; they cannot recognize when their goals should change. Thus, they have difficulty coping with the complex environments of strategy simulations that are continuous, partially obser…


Cited by 40 publications (24 citation statements) · References 30 publications
“…In experiment 4, all the suitable runs that failed in experiments 1-3, plus seven additional runs with forced failures, were given to the planner to explain the failures using diagnostic knowledge. This experiment shows the ability of the system to use a mix of default assumptions and instance assumptions to create… [footnote 23: See http://openmind.hri-us.com/. This was obtained from humans freely listing object types they associated with given room types.]”
Section: Discussion
confidence: 99%
“…All these frameworks deal with identifying a set of propositions or beliefs that either lead to inconsistencies or permit the robot to deduce an observation, while we have a set of operators that allow us to transform states. The idea of detecting discrepancies between expected and observed state also comes up in the goal-driven autonomy framework of Klenk et al [23], where discrepancies trigger the generation of explanations, which in turn lead to the creation of new goals. There is also a relationship to work in explanation-based learning [30].…”
Section: Discussion of Related Work
confidence: 99%
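The goal-driven autonomy cycle this citing paper describes (detect a discrepancy between expected and observed state, explain it, then generate a new goal) can be sketched as follows. This is a minimal illustrative sketch, not the ARTUE or Klenk et al. implementation; every function and state name here is an assumption made up for the example:

```python
# Illustrative sketch of a goal-driven autonomy (GDA) loop: discrepancies
# between expected and observed state trigger explanation, which in turn
# leads to new goals. All names below are hypothetical.

def detect_discrepancies(expected: dict, observed: dict) -> dict:
    """Return the facts whose observed value differs from the expectation."""
    return {fact: (expected.get(fact), value)
            for fact, value in observed.items()
            if expected.get(fact) != value}

def explain(discrepancies: dict) -> list[str]:
    """Toy explanation step: attribute each discrepancy to an unknown cause."""
    return [f"unexplained change in {fact}" for fact in discrepancies]

def generate_goals(explanations: list[str]) -> list[str]:
    """Toy goal formulation: create one investigation goal per explanation."""
    return [f"investigate({e})" for e in explanations]

# One pass of the cycle over a made-up world state:
expected = {"door": "closed", "power": "on"}
observed = {"door": "open", "power": "on"}

discrepancies = detect_discrepancies(expected, observed)
goals = generate_goals(explain(discrepancies))
print(goals)  # ['investigate(unexplained change in door)']
```

A real GDA agent would replace the toy explanation step with abductive reasoning over a domain model, and goal formulation would feed the new goals back into the planner, as in the discrepancy-explanation-goal pipeline attributed to Klenk et al. above.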
“…More recently, Klenk et al (2012) presented ARTUE, which uses a direct application of explanations in a strategy simulation. ARTUE is based on a modified version of the SHOP2 planner, and it tries to explain discrepancies similarly to DiscoverHistory.…”
Section: Related Work
confidence: 99%
“…This is especially true as development moves into higher levels of cognition and reasoning such as goal-based autonomy [16]; developers should be able to use as many existing capabilities as possible to minimize the additional work required.…”
Section: Introduction
confidence: 99%