2018 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2018.8489653
Goal Recognition in Latent Space

Abstract: Approaches to goal recognition have progressively relaxed the requirements about the amount of domain knowledge and available observations, yielding accurate and efficient algorithms capable of recognizing goals. However, to recognize goals in raw data, recent approaches require either human engineered domain knowledge, or samples of behavior that account for almost all actions being observed to infer possible goals. This is clearly too strong a requirement for real-world applications of goal recognition, and …

Cited by 37 publications (42 citation statements)
References 13 publications
“…Some examples of existing systems approaching the whole problem in a unified way are those relying on hierarchical probabilistic models (e.g., Saria and Mahadevan, 2004). Another direction that could lead to successful unified solutions could be to find suitable ways of combining logic-based and deep learning approaches in a way that the advantages of both worlds can be exploited, similar to what was done with logic-based and probabilistic approaches (some work in this direction has already been done, e.g., see Amado et al., 2018b). These hybrid systems could be very good at dealing with sensory data while at the same time being very expressive at describing complete and complex plans.…”
Section: Discussion
confidence: 99%
“…In the case of activity recognition, recurrent neural networks have proven very useful at classifying activities that are short in duration but have a natural ordering, thanks to their ability to take context into account (Hammerla et al., 2016). Amado et al. (2018a) proposed the use of long short-term memory networks for a goal recognition task dealing with sensory input data, requiring much less manual introduction of domain knowledge than other state-of-the-art goal recognition approaches. Ordóñez and Roggen (2016) combined convolutional neural networks with long short-term memory networks for the task of activity recognition.…”
Section: Approaches to the Problem of Activity, Plan, and Goal Recognition
confidence: 99%
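The LSTM-based approach described in the statement above can be illustrated with a minimal NumPy sketch: an LSTM consumes a sequence of observations and the final hidden state is mapped to a score per candidate goal. This is an architectural illustration only, not the authors' implementation; all dimensions and the random, untrained weights are assumptions for demonstration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: gate pre-activations stacked as [input; forget; output; candidate]."""
    z = W @ x + U @ h + b          # shape (4*H,)
    H = h.shape[0]
    i = sigmoid(z[0:H])            # input gate
    f = sigmoid(z[H:2*H])          # forget gate
    o = sigmoid(z[2*H:3*H])        # output gate
    g = np.tanh(z[3*H:4*H])        # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def recognize_goal(observations, W, U, b, W_out):
    """Run the LSTM over an observation sequence and pick the highest-scoring goal."""
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in observations:
        h, c = lstm_step(x, h, c, W, U, b)
    scores = W_out @ h             # one score per candidate goal
    return int(np.argmax(scores))

# Toy usage with random (untrained) stand-in weights.
rng = np.random.default_rng(0)
D, H, G = 5, 8, 3                  # input dim, hidden dim, number of goals
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
W_out = rng.normal(size=(G, H))
obs = [rng.normal(size=D) for _ in range(6)]
goal = recognize_goal(obs, W, U, b, W_out)
```

In practice the weights would be trained end-to-end on labeled observation sequences; the sketch only shows how the sequence model turns raw sensory input into a goal classification.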
“…Pereira et al. (2019) combine deep learning with planning techniques to recognize goals with continuous action spaces. Amado et al. (2018) also use deep learning in an unsupervised fashion to lessen the need for domain expertise in goal recognition approaches; Polyvyanyy et al. (2020) take a similar approach, but using process mining techniques. Existing data of agents' behaviors is required to learn these models.…”
Section: Learned Goal Recognition
confidence: 99%
“…We then decode the recognized goal, obtaining its image representation using the decoder. We illustrate this process in Figure 1(c), and detail the process in Amado et al. (2018).…”
Section: Goal Recognition in Latent Space
confidence: 99%
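The encode–recognize–decode loop quoted above can be sketched in a few lines: encode an observation into latent space, select the closest candidate goal there, then decode that goal back to an image representation. The sketch below is an illustrative toy, with random linear maps standing in for the learned autoencoder and nearest-neighbor matching standing in for the recognizer; none of it is the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "encoder"/"decoder": in the actual approach these would be the
# learned autoencoder; here random linear maps illustrate the data flow.
D, Z = 16, 4                        # raw "image" dim, latent dim
E = rng.normal(size=(Z, D))         # encoder weights (assumed stand-in)
Dw = rng.normal(size=(D, Z))        # decoder weights (assumed stand-in)

def encode(img):
    return E @ img

def decode(z):
    return Dw @ z

# Candidate goal states given as raw "images", pre-encoded into latent space.
goal_images = [rng.normal(size=D) for _ in range(3)]
goal_latents = [encode(g) for g in goal_images]

def recognize(observation_img):
    """Encode the observation, pick the nearest candidate goal in latent
    space, and decode that goal back to its image representation."""
    z = encode(observation_img)
    dists = [np.linalg.norm(z - gz) for gz in goal_latents]
    best = int(np.argmin(dists))
    return best, decode(goal_latents[best])

# An observation identical to goal 1 should be recognized as goal 1
# (its latent distance to that goal is exactly zero).
idx, img = recognize(goal_images[1])
```

The point of the sketch is the pipeline shape: recognition happens entirely in the latent space, and the decoder is only needed at the end to render the recognized goal back into the observation format.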