Robotics: Science and Systems XVI 2020
DOI: 10.15607/rss.2020.xvi.004
Elaborating on Learned Demonstrations with Temporal Logic Specifications

Abstract: Most current methods for learning from demonstrations assume that those demonstrations alone are sufficient to learn the underlying task. This is often untrue, especially if extra safety specifications exist which were not present in the original demonstrations. In this paper, we allow an expert to elaborate on their original demonstration with additional specification information using linear temporal logic (LTL). Our system converts LTL specifications into a differentiable loss. This loss is then used to learn…
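To make the abstract's central mechanism concrete, here is a minimal sketch (not the paper's implementation) of how an LTL-style temporal property can be turned into a differentiable loss using the log-sum-exponential smoothing of min/max discussed by the citing works below; all function names are illustrative:

```python
import torch

# Smooth approximations of min/max via log-sum-exp (temperature beta).
# As beta -> infinity these approach the true min/max, but for finite
# beta the soft max OVER-approximates and the soft min UNDER-approximates.
def soft_max(x, beta=10.0, dim=-1):
    return torch.logsumexp(beta * x, dim=dim) / beta

def soft_min(x, beta=10.0, dim=-1):
    return -torch.logsumexp(-beta * x, dim=dim) / beta

# Illustrative robustness of two temporal operators over a predicate
# signal r[t] (r[t] > 0 means the predicate holds at step t):
#   "always p"     ~ min over time of r
#   "eventually p" ~ max over time of r
def always(r, beta=10.0):
    return soft_min(r, beta)

def eventually(r, beta=10.0):
    return soft_max(r, beta)

# Example: a trajectory x (with gradients) should always stay above 0.2.
x = torch.linspace(0.0, 1.0, 50, requires_grad=True)
robustness = always(x - 0.2)      # > 0 iff (approximately) satisfied
loss = torch.relu(-robustness)    # penalize violation only
loss.backward()                   # gradients flow back to the trajectory
```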

Cited by 22 publications (29 citation statements) · References 17 publications
“…Compared with most existing works, the use of TLTL not only enables the encoding of complex tasks that involve a sequence of logically organized action plans, but also provides a convenient and effective means of designing the cost function. Close to our work, LTL specifications were also incorporated into the learning of DMPs in [32]. However, the loss function designed in [32] is limited to evaluating whether or not a given LTL specification is satisfied, and its log-sum-exponential approximation over-approximates the true maximum.…”
Section: Contributions (mentioning) · Confidence: 99%
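The remark that the log-sum-exponential approximation is over-approximated refers to a standard bound on this smoothing, sketched here for completeness:

```latex
% For values x_1,\dots,x_n and temperature \beta > 0:
\max_i x_i \;\le\; \frac{1}{\beta}\log\sum_{i=1}^{n} e^{\beta x_i}
\;\le\; \max_i x_i + \frac{\ln n}{\beta}
```

Because the smoothed maximum can exceed the true maximum, a smoothed robustness value may be positive even though the underlying specification is actually violated.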
“…Close to our work, LTL specifications were also incorporated into the learning of DMPs in [32]. However, the loss function designed in [32] is limited to evaluating whether or not a given LTL specification is satisfied, and its log-sum-exponential approximation over-approximates the true maximum. In contrast, the weighted TLTL robustness in this work is sound: it can not only qualitatively evaluate the satisfaction of LTL specifications, but also quantitatively determine their degree of satisfaction.…”
Section: Contributions (mentioning) · Confidence: 99%
“…Existing attempts to extend neural models to handle these types of constraints are often made in a post-hoc fashion (e.g., action clipping [14], elaboration using auxiliary losses [15]).…”
Section: Related Work (mentioning) · Confidence: 99%
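For concreteness, action clipping as a post-hoc constraint mechanism amounts to something like the following generic sketch (illustrative names, not the cited papers' code):

```python
import torch

def clipped_policy_action(policy_net, state, low, high):
    """Post-hoc constraint handling: run the learned policy, then clamp.

    The network itself knows nothing about the bounds; the constraint is
    enforced only at execution time, after the action is produced.
    """
    action = policy_net(state)
    return torch.clamp(action, min=low, max=high)
```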
“…In contrast to LTL, which is defined over atomic propositions (i.e., discrete states), STL is defined over continuous real-valued signals and encompasses a notion of robustness: a scalar measuring the degree of specification satisfaction or violation. Accordingly, there has been growing interest in using STL robustness in gradient-based methods for controller synthesis (e.g., [7], [8], [9], [10]). Recently, stlcg [11], a toolbox leveraging PyTorch [12] to compute STL robustness, was developed.…”
Section: Introduction (mentioning) · Confidence: 99%
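Without reproducing stlcg's actual interface, the following generic PyTorch sketch shows the kind of computation involved: the robustness of "eventually (x > c)" over a finite discrete-time signal, computed with autograd-friendly operations (the function name is an assumption for illustration):

```python
import torch

# Generic (not stlcg's API) STL robustness for "eventually (x > c)":
# rho = max over time of (x[t] - c). torch.max is piecewise
# differentiable, so rho can drive gradient-based controller synthesis.
def rho_eventually_gt(signal, c):
    return torch.max(signal - c)

signal = torch.tensor([0.1, 0.4, 0.9], requires_grad=True)
rho = rho_eventually_gt(signal, 0.5)  # = 0.4 > 0: satisfied, margin 0.4
rho.backward()                        # gradient concentrates at the argmax
```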