2019
DOI: 10.1007/s42113-019-00053-y

People Infer Recursive Visual Concepts from Just a Few Examples

Abstract: Machine learning has made major advances in categorizing objects in images, yet the best algorithms miss important aspects of how people learn and think about categories. People can learn richer concepts from fewer examples, including causal models that explain how members of a category are formed. Here, we explore the limits of this human ability to infer causal "programs" (latent generating processes with nontrivial algorithmic properties) from one, two, or three visual examples. People were asked to extrapo…


Cited by 23 publications (22 citation statements)
References 42 publications
“…We allow these to take absolute feature values, like color(r') ⇐ blue, but also values relative to the agent's or recipient's pre-interaction features, such as color(r') ⇐ color(a) or edge(r') ⇐ edge(r) + 1. These causal functions natively capture many kinds of causal theories people may entertain, as confirmed by their self-reports and our model fits (see also Bramley et al., 2018; Goodman et al., 2008; Lake & Piantadosi, 2020). Moreover, by grounding causal functions in such object-based representations, these causal functions naturally generalize to novel objects.…”
Section: Discussion (supporting)
confidence: 71%
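To make the quoted idea concrete, here is a minimal Python sketch (my own illustration, not code from either cited paper) of causal functions over object-based representations: a feature of the post-interaction recipient r' is set either to an absolute value or to a value relative to the agent a or the pre-interaction recipient r. The `Obj` dataclass and the function names are hypothetical.

```python
# Minimal illustrative sketch (not from the cited papers): causal functions over
# object features. A recipient's post-interaction feature can be set to an
# absolute value (e.g., color(r') <- blue) or to a value relative to the agent's
# or recipient's pre-interaction features (e.g., color(r') <- color(a),
# edge(r') <- edge(r) + 1). All names here are hypothetical.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Obj:
    color: str
    edges: int  # number of edges of the object's shape

# Each causal function maps (agent, recipient) -> updated recipient.
def set_color_blue(agent: Obj, recipient: Obj) -> Obj:
    return replace(recipient, color="blue")               # absolute: color(r') <- blue

def copy_agent_color(agent: Obj, recipient: Obj) -> Obj:
    return replace(recipient, color=agent.color)           # relative: color(r') <- color(a)

def increment_edges(agent: Obj, recipient: Obj) -> Obj:
    return replace(recipient, edges=recipient.edges + 1)   # relative: edge(r') <- edge(r) + 1

if __name__ == "__main__":
    agent, recipient = Obj("red", 3), Obj("green", 4)
    for f in (set_color_blue, copy_agent_color, increment_edges):
        print(f.__name__, "->", f(agent, recipient))
```

Because each rule reads only named object features rather than specific objects, the same function applies unchanged to any new `Obj`, which is one way to read the claim that such causal functions "naturally generalize to novel objects."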
“…Symbolic approaches enable compositionality and systematicity, while the sub-symbolic techniques, especially the fast, incremental approximations, make this more scalable to real-world data (Bramley et al., 2017). This framework also draws a close link with probabilistic program induction models (e.g., Bramley et al., 2018; Ellis et al., 2021; Lake et al., 2015; Lake & Piantadosi, 2020), where causal beliefs and concepts can be viewed as programs, and accurate generalizations can be viewed as evidence for successful program synthesis whereby these programs increasingly reflect the true causal laws of nature. We believe our modeling framework can be extended to broader generalization cases beyond causal cognition, and contributes to the collective effort for a hybrid approach in understanding human cognition (Lake et al., 2017; Oaksford et al., 2007; Valentin et al., 2021).…”
Section: Discussion (mentioning)
confidence: 95%
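As a rough illustration of the "concepts as programs" view in the statement above, the following toy Python sketch (my own assumption-laden example, not the authors' model or stimuli) treats a recursive visual concept as a generative program: a branching rule is applied repeatedly, and choosing the recursion depth extrapolates the pattern beyond the few levels shown in an example.

```python
# Toy sketch (illustrative only): a recursive visual "concept" expressed as a
# program. The concept is a branching rule that replaces each point with several
# offset, scaled copies of itself; running the program to greater depths
# extrapolates the pattern, the kind of generalization few-shot program
# induction is meant to capture.
from typing import List, Tuple

Point = Tuple[float, float]

def expand(points: List[Point], offsets: List[Point], scale: float) -> List[Point]:
    """One recursive step: replace every point with shifted, scaled copies."""
    return [(x + scale * dx, y + scale * dy) for (x, y) in points for (dx, dy) in offsets]

def run_concept(offsets: List[Point], depth: int) -> List[Point]:
    """Run the generative program: start from the origin and apply `expand` `depth` times."""
    points: List[Point] = [(0.0, 0.0)]
    scale = 1.0
    for _ in range(depth):
        points = expand(points, offsets, scale)
        scale /= 2.0  # each recursion level adds finer structure
    return points

if __name__ == "__main__":
    # A hypothetical three-armed branching rule; depth controls how far to extrapolate.
    arms = [(1.0, 0.0), (-0.5, 0.9), (-0.5, -0.9)]
    print(len(run_concept(arms, depth=1)), len(run_concept(arms, depth=3)))  # 3, 27
```

In this framing, program induction corresponds to searching over rules like `arms` and the scaling scheme until the generated pattern reproduces the one, two, or three observed examples.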
“…The learning as programming approach, however, is importantly different in providing learners the full expressive power of symbolic programs both theoretically (i.e., Turing completeness) and practically (i.e., freedom to adopt any formal syntax). This approach applies broadly to developmental phenomena, including counting [52], concept learning [13,53], function words [54], kinship [55], theory learning [56,57], lexical acquisition [23], question answering [15], semantics and pragmatics [25,58,59], recursive reasoning [60], sequence transformation [61], sequence prediction [18,62], structure learning [63], action concepts [64], perceptual understanding [14,65], and causality [66]. These applications build on a tradition of studying agents who understand the world by inferring computational processes that could have generated observed data, which is optimal in a certain sense [67,68], and aligns with rational constructivist models of development [69][70][71][72].…”
Section: Trends in Cognitive Sciences (mentioning)
confidence: 99%
“…It also draws a close link with probabilistic program induction models (e.g., Bramley et al., 2018; Ellis et al., 2020; Lake et al., 2015; Lake & Piantadosi, 2020), where causal beliefs and concepts can be viewed as programs, and accurate generalizations can be viewed as evidence for successful program synthesis whereby these programs increasingly reflect the true causal laws of nature.…”
Section: Constructive Cognition (mentioning)
confidence: 96%