Learning Automata-Based Task Knowledge Representation from Large-Scale Generative Language Models
2022 · Preprint
DOI: 10.48550/arxiv.2212.01944

Cited by 1 publication (1 citation statement) · References 0 publications
“…Efforts to generate multimodal, text, and image-based goal-conditioned plans are exemplified by (Lu et al. 2023b). Additionally, a subset of studies in this survey investigates the fine-tuning of seq2seq, code-based language models (Pallagani et al. 2022, 2023b), which are noted for their advanced … Application of LLMs in Planning: Language Translation (23): Xie et al. 2023; Guan et al. 2023; Chalvatzaki et al. 2023; Yang, Ishay, and Lee 2023; Wong et al. 2023; Kelly et al. 2023; Lin et al. 2023c; Sakib and Sun 2023; Yang et al. 2023b; Parakh et al. 2023; Dai et al. 2023; Yang et al. 2023a; Shirai et al. 2023; Ding et al. 2023b; Zelikman et al. 2023; Pan et al. 2023; Xu et al. 2023b; Brohan et al. 2023; Yang, Gaglione, and Topcu 2022; Chen et al. 2023a; You et al. 2023. Plan Generation (53): Sermanet et al. 2023; Li et al. 2023b; Pallagani et al. 2022; Silver et al. 2023; Pallagani et al. 2023b; Arora and Kambhampati 2023; Fabiano et al. 2023; Chalvatzaki et al. 2023; Gu et al. 2023; Silver et al. 2022; Hao et al. 2023a; Lin et al. 2023b; Yuan et al. 2023b; Gandhi, Sadigh, and Goodman 2023; …”
Section: Plan Generation (citation type: mentioning)
confidence: 99%