Language and culture internalization for human-like autotelic AI
2022 · DOI: 10.1038/s42256-022-00591-4
Cited by 20 publications (12 citation statements) · References 57 publications
“…Language-conditioned agents, language for goal-exploration. Building language-instructable agents has been one of the aims of AI research since its inception and is still a very active area of research today in machine learning (Anderson et al., 2018; Luketina et al., 2019) and robotics (Tellex et al., 2020); notable recent breakthroughs were achieved through the use of large-scale pre-trained foundation models for planning (Ahn et al., 2022; Huang et al., 2022) and multi-modal grounding (Fan et al., 2022; Jiang et al., 2022). Language has also been found to be beneficial for goal-exploration, by enabling abstraction (Mu et al., 2022; Tam et al., 2022), combination of different abstraction levels (Mirchandani et al., 2021) and goal imagination (Colas et al., 2022a) supported by systematic generalization (Bahdanau et al., 2019). Go-Explore has also been studied in the context of text environments (Madotto et al., 2021), albeit in very simple text environments with comparatively few valid actions compared to ScienceWorld, not in a multi-goal setting, and with distinct exploration and policy-learning phases.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Learning to represent, imagine and sample goals, so as to learn skills that humans care about, requires interaction with human socio-cultural worlds (see the argument in the introduction and Colas et al., 2022a). Autotelic agents must first internalize the goal representations of humans before they can learn the corresponding skills, build upon them and contribute back to a shared human-machine cultural evolution.…”
Section: Related Work (mentioning)
Confidence: 99%
“…However, this method only works well when the task goals are continuous or densely distributed (e.g., setting the destination coordinates of objects as goals). For discrete or sparsely distributed goals in the form of semantic configurations (Akakzia et al., 2021) or natural language (Colas et al., 2020), which conform more closely to the way humans give instructions, the trajectories that achieve the goals of interest make up only a small proportion of the agent's experience. Only from these trajectories can the goal-relabeling method discover useful learning signals.…”
Section: Introduction (mentioning)
Confidence: 99%
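The goal-relabeling method the excerpt refers to can be illustrated with a minimal hindsight-style relabeler (in the spirit of hindsight experience replay): each transition is stored again with a goal that was actually achieved later in the trajectory, so that sparse-goal trajectories still yield reward signal. The function name, the transition tuple layout, and the toy reach task below are illustrative assumptions, not code from any of the cited works.

```python
import random

def relabel_with_achieved_goals(trajectory, reward_fn, k=4):
    """HER-style hindsight relabeling (illustrative sketch).

    trajectory: list of (state, action, next_state, goal) tuples.
    reward_fn:  maps (achieved_state, goal) -> reward.
    k:          number of substitute goals sampled per transition.
    Returns transitions extended with a reward, including relabeled
    copies whose goal is a state achieved later in the trajectory.
    """
    relabeled = []
    for t, (state, action, next_state, goal) in enumerate(trajectory):
        # Keep the original transition; with sparse goals its reward
        # is usually zero, which is exactly the problem relabeling fixes.
        relabeled.append((state, action, next_state, goal,
                          reward_fn(next_state, goal)))
        # Substitute goals: states actually reached from step t onward.
        future = [s2 for (_, _, s2, _) in trajectory[t:]]
        for new_goal in random.sample(future, min(k, len(future))):
            relabeled.append((state, action, next_state, new_goal,
                              reward_fn(next_state, new_goal)))
    return relabeled

# Toy 1-D reach task (hypothetical): integer states, exact-match goals.
reward_fn = lambda s, g: 1.0 if s == g else 0.0
traj = [(0, +1, 1, 5), (1, +1, 2, 5)]  # the commanded goal 5 is never reached
out = relabel_with_achieved_goals(traj, reward_fn, k=2)
```

Every relabeled copy whose substitute goal equals the achieved next state carries a reward of 1.0, so even this failed trajectory produces positive learning signal; the excerpt's point is that without such relabeling, discrete or language goals would almost never be hit by chance.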