2023
DOI: 10.1609/aaai.v37i7.26017

A Data Source for Reasoning Embodied Agents

Abstract: Recent progress in using machine learning models for reasoning tasks has been driven by novel model architectures, large-scale pre-training protocols, and dedicated reasoning datasets for fine-tuning. In this work, to further pursue these advances, we introduce a new data generator for machine reasoning that integrates with an embodied agent. The generated data consists of templated text queries and answers, matched with world-states encoded into a database. The world-states are a result of both world dynam…

Cited by 4 publications (3 citation statements). References 32 publications.

Citation statements (ordered by relevance):
“…CoT has been shown to significantly enhance performance on complex reasoning tasks such as arithmetic and commonsense reasoning. This success has spurred several subsequent works that adapt different strategies to improve CoT, including self-consistency, Least-to-Most prompting, self-notes (Lanchantin et al., 2023), Progressive-Hint Prompting, and Self-Polish. Despite the remarkable success of CoT prompting, previous studies have primarily focused on how to use CoT to achieve the best results while ignoring how to construct prompting examples.…”
Section: Chain-of-thought Prompt
Citation type: mentioning (confidence: 99%)
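For readers unfamiliar with the prompting format these statements refer to, below is a minimal, illustrative sketch of how a chain-of-thought (CoT) few-shot prompt is typically constructed: each worked example pairs a question with explicit intermediate reasoning before the final answer, and the new question is appended at the end. The helper name build_cot_prompt and the example data are hypothetical, not taken from the cited works.

```python
# Minimal sketch of chain-of-thought (CoT) prompt construction.
# All names and examples here are illustrative assumptions.

cot_examples = [
    {
        "question": "Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
                    "How many balls does he have now?",
        "reasoning": "Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
                     "5 + 6 = 11.",
        "answer": "11",
    },
]

def build_cot_prompt(examples, new_question):
    """Concatenate worked examples (question, reasoning, answer), then
    append the new question so the model is nudged to reason step by step."""
    blocks = []
    for ex in examples:
        blocks.append(
            f"Q: {ex['question']}\n"
            f"A: {ex['reasoning']} The answer is {ex['answer']}.\n"
        )
    blocks.append(f"Q: {new_question}\nA:")
    return "\n".join(blocks)

print(build_cot_prompt(
    cot_examples,
    "A farm has 3 pens with 4 sheep each. How many sheep are there in total?",
))
```

The variants named in the statement above (self-consistency, Least-to-Most, self-notes, Progressive-Hint Prompting, Self-Polish) alter how such prompts are sampled, decomposed, or iteratively refined, but the basic question/reasoning/answer structure is the same.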
“…CoT has shown promising results in improving the reasoning abilities of LLMs. Various strategies, such as self-notes (Lanchantin et al., 2023), Progressive-Hint Prompting, and Least-to-Most prompting, have been proposed to enhance CoT further.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…What definitively constitutes meaning or understanding has commonly been treated in the linguistics literature as a function of at least the underlying text and embodied cues. However, with extensive amounts of text being fed to models, models have been able to act as knowledge bases (Petroni et al., 2019) as well as arguably approximate some aspects of embodiment (Huang et al., 2022; Lanchantin et al., 2023). So, while one definitely can't discount Selbst et al.'s (2019) recommendation that many contextual and politically contested topics should not be technology-forced, LLMs do not seem completely handicapped for subjective tasks that involve a high degree of uncertainty; for example, Thomas et al. (2023) show how LLMs can be used to accurately model searcher preferences, and LLMs have been used to replace human evaluations (Chiang and Lee, 2023), tasks which generally require a lot of human annotation effort.…”
Section: The Solutionism Trap
Citation type: mentioning (confidence: 99%)