Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence 2022
DOI: 10.24963/ijcai.2022/646

On the Computational Complexity of Model Reconciliations

Abstract: A longstanding objective in classical planning is to synthesize policies that generalize across multiple problems from the same domain. In this work, we study generalized policy search-based methods with a focus on the score function used to guide the search over policies. We demonstrate limitations of two score functions, policy evaluation and plan comparison, and propose a new approach that overcomes these limitations. The main idea behind our approach, Policy-Guided Planning for Generalized Policy Gen… [abstract truncated]

Cited by 3 publications (3 citation statements); citing publications from 2023–2024. References: 0 publications.
“…Many vital tasks are centered on plan optimality verification, e.g., the task of model reconciliation, of plan post-optimization, and of domain learning. The first one is to change a planning problem's domain with the least number of changes so as to turn a plan into an optimal solution, which is Σ₂ᵖ-complete (Sreedharan, Bercher, and Kambhampati 2022). The second one is concerned with whether a plan can be further optimized by removing some redundant actions from it, which is NP-complete in both classical planning (Fink and Yang 1992; Nakhost and Müller 2010) and POCL planning (Olz and Bercher 2019).…”
Section: Verification of Plan Optimality (citation type: mentioning)
confidence: 99%
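The quoted description of plan post-optimization (deciding whether redundant actions can be removed while the plan remains a solution) can be made concrete with a small sketch. The Python snippet below is an illustration under an assumed set-based STRIPS representation; the helper names (`applicable`, `apply_action`, `has_redundant_actions`) and the dict layout are invented for this example and do not come from the cited papers. Its brute-force subsequence enumeration also hints at why the decision problem is in NP: the shorter subsequence itself serves as a succinct certificate.

```python
from itertools import combinations

# Illustrative STRIPS-style sketch: actions are dicts with "pre", "add", "del"
# sets of ground facts. This layout is an assumption made for the example,
# not the formalism used in the cited papers.

def applicable(state, action):
    """An action is applicable if all of its preconditions hold in the state."""
    return action["pre"] <= state

def apply_action(state, action):
    """Apply an action: remove its delete effects, then add its add effects."""
    return (state - action["del"]) | action["add"]

def is_valid_plan(init, goal, plan):
    """Execute `plan` from `init` and check that the goal holds afterwards."""
    state = set(init)
    for action in plan:
        if not applicable(state, action):
            return False
        state = apply_action(state, action)
    return goal <= state

def has_redundant_actions(init, goal, plan):
    """Brute-force check: is some proper subsequence of `plan` still a valid plan?
    Enumerating subsequences is exponential in the plan length; the subsequence
    itself is a succinct certificate, matching membership in NP."""
    n = len(plan)
    for k in range(n):  # all lengths strictly shorter than the plan itself
        for idxs in combinations(range(n), k):
            if is_valid_plan(init, goal, [plan[i] for i in idxs]):
                return True
    return False

# Tiny example: the no-op action is redundant, so the plan can be shortened.
noop = {"pre": set(), "add": set(), "del": set()}
move = {"pre": {"at-a"}, "add": {"at-b"}, "del": {"at-a"}}
print(has_redundant_actions({"at-a"}, {"at-b"}, [noop, move]))  # True
```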
“…Domain learning with expert knowledge is related to both domain modification (Lin and Bercher 2021) and model reconciliation (Sreedharan, Bercher, and Kambhampati 2022), which has the same setting as domain modification but additionally requires that the given plan is optimal in the modified domain. The constraint that the domain to be found can have at most k modifications from the base domain can also be expressed as a propositional logical formula.…”
Section: Domain Learning with Expert Knowledge (citation type: mentioning)
confidence: 99%
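The quoted remark that an "at most k modifications" constraint can be expressed as a propositional formula can be illustrated with the standard binomial at-most-k CNF encoding. The sketch below, with one Boolean change-variable per candidate modification and DIMACS-style integer literals, is an assumed setup for illustration only and not necessarily the encoding used in the cited work.

```python
from itertools import combinations

def at_most_k(change_vars, k):
    """Binomial at-most-k encoding: for every (k+1)-subset of the Boolean
    change-variables, at least one must be false. Clauses are lists of
    DIMACS-style integer literals (a negative literal negates the variable)."""
    return [[-v for v in subset] for subset in combinations(change_vars, k + 1)]

# Example: variables 1..4 each flag one candidate modification of the base
# domain; allow at most 2 modifications in the repaired domain.
clauses = at_most_k([1, 2, 3, 4], 2)
print(clauses)  # [[-1, -2, -3], [-1, -2, -4], [-1, -3, -4], [-2, -3, -4]]
```

Conjoined with the rest of the encoding of the domain and the given plan, any satisfying assignment then corresponds to a repaired domain that differs from the base domain in at most k components.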
“…To our best knowledge only a few approaches exist that provide further AI-based support. Lindsay et al (2020), for example, refined an inaccurate hybrid domain to capture the environment more accurately, and Sreedharan et al (2020) revised a dialogue domain via model reconciliation (Sreedharan, Chakraborti, and Kambhampati 2021; Sreedharan, Bercher, and Kambhampati 2022). Lin and Bercher (2021) also studied the complexity of finding corrections to a flawed domain model provided a plan that shall be a solution but currently is not.…”
Section: Introduction (citation type: mentioning)
confidence: 99%