2022
DOI: 10.1109/lra.2022.3191068
Approximate Task Tree Retrieval in a Knowledge Network for Robotic Cooking

Abstract: Task planning for robotic cooking involves generating a sequence of actions for a robot to prepare a meal successfully. This paper introduces a novel task tree generation pipeline that produces correct plans and efficient execution for cooking tasks. Our method first uses a large language model (LLM) to retrieve recipe instructions and then utilizes a fine-tuned GPT-3 to convert them into a task tree, capturing sequential and parallel dependencies among subtasks. The pipeline then mitigates the uncertainty and …
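The abstract describes a task tree that captures sequential and parallel dependencies among cooking subtasks. As a minimal sketch of that idea — the class and method names here are illustrative assumptions, not the paper's actual data structures — such a tree might look like:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskNode:
    """One cooking subtask; children are either ordered or order-free."""
    name: str
    sequential: bool = True  # True: children must run in order; False: any order
    children: List["TaskNode"] = field(default_factory=list)

    def linearize(self) -> List[str]:
        """Flatten the tree into a single executable action sequence.

        Parallel children are emitted in stored order here; a scheduler
        could interleave them freely at execution time.
        """
        if not self.children:
            return [self.name]
        order: List[str] = []
        for child in self.children:
            order.extend(child.linearize())
        return order

# Tiny example recipe: prep steps can happen in any order,
# then the remaining steps run sequentially.
recipe = TaskNode("make pasta", sequential=True, children=[
    TaskNode("prep", sequential=False, children=[
        TaskNode("boil water"),
        TaskNode("chop garlic"),
    ]),
    TaskNode("cook pasta"),
    TaskNode("combine with sauce"),
])
print(recipe.linearize())
# → ['boil water', 'chop garlic', 'cook pasta', 'combine with sauce']
```

The `sequential` flag is the key piece: it lets a planner distinguish steps that must be serialized from steps a robot (or two robots) could execute concurrently.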

Cited by 11 publications (4 citation statements)
References 33 publications
“…Language translation in the context of LLMs and planning involves transforming natural language instructions into structured planning languages (Wong et al. 2023; Kelly et al. 2023; Pan et al. 2023; Xie et al. 2023; Yang, Ishay, and Lee 2023; Lin et al. 2023c; Sakib and Sun 2023; Yang et al. 2023b; Parakh et al. 2023; Yang et al. 2023a; Dai et al. 2023; Ding et al. 2023b; Zelikman et al. 2023; Xu et al. 2023b; Chen et al. 2023a; You et al. 2023) such as PDDL, and vice versa, utilizing in-context learning techniques (Guan et al. 2023). This capability effectively bridges the gap between human linguistic expression and machine-understandable formats, enhancing intuitive and efficient planning processes.…”
Section: Language Translation
confidence: 99%
“…Efforts to generate multimodal, text, and image-based goal-conditioned plans are exemplified by (Lu et al. 2023b). Additionally, a subset of studies in this survey investigates the fine-tuning of seq2seq, code-based language models (Pallagani et al. 2022, 2023b), which are noted for their advanced …”
[Figure residue: a taxonomy titled "Application of LLMs in Planning" listing Language Translation (23 works, including Sakib and Sun 2023) and Plan Generation (53 works).]
Section: Plan Generation
confidence: 99%
“…Related to this idea, we will also explore re-planning approaches if we encounter action failure or if action contexts have not been experienced before, in the same vein as previous work [25]. Finally, we will also review methods to generalize knowledge and action contexts using semantic similarity [6], [28] to creatively extend concepts at the symbolic level, or trajectories at the execution level, to new object instances in the physical world.…”
Section: A. Future Work
confidence: 99%
“…Some experiments showed robotic chefs learning from human feedback on the taste of cooked dishes [24]. Other experiments transcribed recipes into sets of actions [25], and robots have been demonstrated cooking directly from recipes [26].…”
Section: Introduction
confidence: 99%