2022
DOI: 10.7554/eLife.75485
One-shot generalization in humans revealed through a drawing task

Abstract: Humans have the amazing ability to learn new visual concepts from just a single exemplar. How we achieve this remains mysterious. State-of-the-art theories suggest observers rely on internal ‘generative models’, which not only describe observed objects, but can also synthesize novel variations. However, compelling evidence for generative models in human one-shot learning remains sparse. In most studies, participants merely compare candidate objects created by the experimenters, rather than generating their own…

Citations: cited by 7 publications (4 citation statements)
References: 80 publications (121 reference statements)
“…Rather, they had to simulate its detailed effects on a different object (e.g., where a twist might or might not produce twirls). Also, even though drawing is a powerful and rich tool for measuring mental representations compared with psychophysical methods (e.g., Bainbridge et al., 2019; Hall et al., 2021; Sayim & Wagemans, 2017; Tiedemann et al., 2022), responses are far more variable than with a two-alternative forced-choice categorization task. This is a consequence of individual variability in, for example, motor abilities, cultural or graphic-system backgrounds, or drawing expertise (also, our participants were not artists; e.g., Bainbridge, 2021; Chamberlain et al., 2019; Cohn, 2020; Kozbelt & Ostrofsky, 2018).…”
Section: Discussion
Confidence: 99%
“…Our study also illustrates once again how we can use drawing as a tool to measure mental representations, without introducing experimenter bias by, for example, preselecting responses for participants to choose from. This is especially important when mapping out mental representational spaces (e.g., of object or scene categories; Bainbridge et al., 2019; Tiedemann et al., 2022).…”
Section: Discussion
Confidence: 99%
“…We experimentally show that our novel SNN model enriched by Hebbian plasticity outperforms the state-of-the-art deep-learning mechanisms of long short-term memory (LSTM) networks [36], [37] and long short-term memory spiking neural networks (LSNNs) [3], [14] in a sequential pattern-memorization task, and demonstrates superior out-of-distribution generalization capabilities compared with these models. The exceptional performance of contemporary standard deep-learning mechanisms relies strictly on the availability of a large number of training examples, whereas humans are capable of learning new tasks from a single exposure (one-shot learning) [38]. We show that our memory-equipped SNN model provides a novel SNN-based solution to this problem and demonstrate that it can be successfully applied to one-shot learning and classification of handwritten characters, improving over previous SNN models [39], [40].…”
Confidence: 92%
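To make the Hebbian plasticity named in the statement above concrete, here is a minimal, generic sketch of a Hebbian weight update; the learning rate, decay term, and array shapes are illustrative assumptions and do not reproduce the cited memory-equipped SNN model.

```python
import numpy as np

# Generic Hebbian update sketch (not the cited SNN model).
HEBB_LR = 0.01   # assumed learning rate
DECAY = 0.001    # assumed passive decay to keep weights bounded

def hebbian_update(weights, pre_spikes, post_spikes):
    """Strengthen connections between co-active pre- and postsynaptic units.

    weights:     (n_pre, n_post) synaptic weight matrix
    pre_spikes:  (n_pre,)  binary spike vector at the current step
    post_spikes: (n_post,) binary spike vector at the current step
    """
    # Outer product: a weight grows only where pre and post fire together.
    coactivity = np.outer(pre_spikes, post_spikes)
    return weights + HEBB_LR * coactivity - DECAY * weights

# A single co-activation episode already leaves a trace in the weights,
# which is the intuition behind one-shot memorization in such models.
w = np.zeros((4, 3))
w = hebbian_update(w, np.array([1, 0, 1, 0]), np.array([0, 1, 0]))
```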
“…It is well established that humans can learn new concepts from a few examples, but just how few can it be? Previous research in both human and machine learning has treated one-shot learning, where the participant must learn a new concept from a single example, as the limit on sample efficiency in supervised learning settings (Tiedemann et al., 2022; Fei-Fei et al., 2006a). Recent research in machine learning has shown that it is theoretically possible to learn more novel concepts than the number of presented examples, so-called less-than-one-shot (LO-shot) learning (Sucholutsky & Schonlau, 2021a; Sucholutsky et al., 2021), by associating examples with "soft labels" that describe their closeness to each concept, as opposed to the traditionally used "hard labels" that associate each example with a single concept.…”
Section: Introduction
Confidence: 99%
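To make the hard-label/soft-label distinction concrete, the following minimal sketch stores two examples with soft labels over three concepts and classifies a query by distance-weighted soft-label aggregation; the prototype points, the label values, and the decision rule are illustrative assumptions, not the procedure of Sucholutsky & Schonlau (2021a).

```python
import numpy as np

# Two stored examples in a 2-D feature space.
prototypes = np.array([[0.0, 0.0],
                       [1.0, 1.0]])

# Hard labels: each example names exactly one concept.
hard_labels = np.array([0, 1])

# Soft labels: each example describes its closeness to concepts A, B, C.
# Note that concept B is never the "owner" of either example.
soft_labels = np.array([[0.6, 0.4, 0.0],
                        [0.0, 0.4, 0.6]])

def classify(query, prototypes, soft_labels):
    """Blend the soft labels of the prototypes, weighted by inverse distance."""
    d = np.linalg.norm(prototypes - query, axis=1)
    w = 1.0 / (d + 1e-9)
    scores = w @ soft_labels        # aggregated evidence for each concept
    return scores.argmax()          # predicted concept index

# A query halfway between the two examples is assigned to concept B,
# i.e. more concepts (3) are recoverable than examples stored (2).
print(classify(np.array([0.5, 0.5]), prototypes, soft_labels))
```

With hard labels the same query could only ever be assigned to concept A or C, which is the sense in which soft labels enable less-than-one-shot learning in this toy setup.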