2022
DOI: 10.1037/rev0000381

As within, so without, as above, so below: Common mechanisms can support between- and within-trial category learning dynamics.

Abstract: Two fundamental difficulties when learning novel categories are deciding (a) what information is relevant and (b) when to use that information. Although previous theories have specified how observers learn to attend to relevant dimensions over time, those theories have largely remained silent about how attention should be allocated on a within-trial basis, which dimensions of information should be sampled, and how the temporal order of information sampling influences learning. Here, we use the adaptive attenti…

Cited by 12 publications (22 citation statements)
References 199 publications
“…One computational model we discuss here is the adaptive attention representation model (AARM; Galdo et al., 2022; Turner, 2019; Weichart, Evans, et al., 2022; Weichart, Galdo, et al., 2022), which is derived from other exemplar-based models of categorization (Estes, 1986; Medin & Schaffer, 1978; Nosofsky, 1986). In AARM (as well as other models in this class), attention is represented as a vector containing the amount of attention allocated to each dimension of information.…”
Section: Optimizing for a Learner’s Goals
confidence: 99%
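The attention-vector representation described in this excerpt follows the exemplar-based tradition cited there (e.g., Nosofsky's generalized context model). A minimal sketch of attention-weighted exemplar similarity, with illustrative values and a sensitivity parameter `c` that are not taken from the paper:

```python
import numpy as np

def exemplar_similarity(probe, exemplar, attention, c=1.0):
    """GCM-style similarity between a probe and a stored exemplar.
    `attention` holds one weight per stimulus dimension (assumed to
    sum to 1); larger weights make that dimension matter more."""
    distance = np.sum(attention * np.abs(probe - exemplar))
    return np.exp(-c * distance)

def category_evidence(probe, exemplars_by_category, attention):
    """Summed similarity of the probe to each category's exemplars."""
    return {cat: sum(exemplar_similarity(probe, ex, attention) for ex in exs)
            for cat, exs in exemplars_by_category.items()}

# Two categories separated on dimension 0; attention favors dimension 0.
attention = np.array([0.8, 0.2])
exemplars = {"A": [np.array([0.0, 0.0]), np.array([0.0, 1.0])],
             "B": [np.array([1.0, 0.0]), np.array([1.0, 1.0])]}
evidence = category_evidence(np.array([0.1, 0.9]), exemplars, attention)
# Because attention weights dimension 0, the probe aligns with category A.
```

Because the attention vector down-weights dimension 1, the probe's mismatch on that dimension barely affects the evidence; shifting attention toward dimension 1 would change which category wins.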
“…As attention increases for one particular dimension, it enhances that dimension’s contribution to the category response as well as the probability of fixating on that dimension during the decision period. When we make explicit the connection between fixation probability and strength of encoding (Weichart, Galdo, et al., 2022), AARM expresses differential encoding such that features that were thought to be relevant at one point in the learning sequence are more strongly encoded in memory and will have a larger impact on both decisions and encoding in the future. One of AARM’s central characteristics is its commitment to understanding how attention is deployed on a trial-by-trial basis as learners interact with their environment.…”
Section: Optimizing for a Learner’s Goals
confidence: 99%
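The link this excerpt draws between dimension-level attention and fixation probability can be illustrated with a simple normalization step. A hedged sketch only: the softmax form and temperature parameter here are illustrative choices, not AARM's exact specification:

```python
import numpy as np

def fixation_probabilities(attention, temperature=1.0):
    """Map an attention vector to a probability of fixating each
    dimension via a softmax: dimensions receiving more attention are
    fixated more often. (Illustrative rule, not AARM's exact form.)"""
    scaled = np.asarray(attention, dtype=float) / temperature
    scaled -= scaled.max()          # subtract max for numerical stability
    weights = np.exp(scaled)
    return weights / weights.sum()

# A learner attending mostly to dimension 0 fixates it most often.
probs = fixation_probabilities([2.0, 1.0, 0.5])
```

Lowering `temperature` makes fixations concentrate on the highest-attention dimension, which is one way to capture the sharpening of sampling over learning that the excerpt describes.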
“…Moreover, the dynamics of orienting attention to memory representations are analogous to the dynamics of orienting attention to a target element in an external stimulus (Logan et al., 2022). Attention can also shift rapidly within a trial in order to perform categorization tasks (Weichart et al., 2022). Finally, rapid within-trial shifts of attention have been shown to account for the extra-list feature effect (Mewhort & Johns, 2000) in recognition memory (Osth et al., in press).…”
Section: Capacity and Attention
confidence: 99%