2016
DOI: 10.1080/17470218.2015.1130068

Applying an exemplar model to an implicit rule-learning task: Implicit learning of semantic structure

Abstract: Studies of implicit learning often examine people's sensitivity to sequential structure. Computational accounts have evolved to reflect this bias. An experiment conducted by Neil and Higham [Neil, G. J., & Higham, P. A. (2012). Implicit learning of conjunctive rule sets: An alternative to artificial grammars. Consciousness and Cognition, 21, 1393-1400] points to limitations in the sequential approach. In the experiment, participants studied words selected according to a conjunctive rule. At test, participants d…

Cited by 20 publications (19 citation statements) · References 24 publications
“…More generally, the present work points to the importance of models capable of extracting information from large text bases, an issue that has been explored in greater detail elsewhere (e.g., Chubala et al., 2016; Hills, Jones, & Todd, 2012; Johns & Jones, 2010; Johns, Jones, & Mewhort, 2012; Johns & Jones, 2015; Johns, Taler, et al., 2017; Mewhort et al., 2017; Taler et al., 2013). Basing a model's performance on large-scale environmental information provides a strong case for the model's plausibility, because it can scale to human levels of experience.…”
Section: Discussion
confidence: 69%
“…It is known that an exemplar memory system can accomplish some of the fundamental operations of language (Abbot-Smith & Tomasello, 2006). Jamieson and Mewhort (2009a, 2009b) and Chubala, Johns, Jamieson, and Mewhort (2016), for example, have shown that such a model can account for several artificial-grammar and implicit-learning results. Johns and Jones (2015) extended the approach; their account explained additional results across sentence processing, grounded cognition, and the cultural evolution of language.…”
Section: Example 3: Sentence Processing
confidence: 99%
“…Prior literature in the field of implicit learning has primarily focused on artificial grammar-learning tasks [31,32] and sequence learning [33], but few studies have used self-reports in conditioning experiments [34,35]. In these latter studies, however, no implicit conditioned responses were found.…”
Section: Introduction
confidence: 99%
“…In the GCM, similarity is raised to the power of c, where c is a sensitivity parameter set by the modeller. Conversely, in MINERVA 2, the exponent is always 3, though some variants of MINERVA dynamically vary the exponent (e.g., Mewhort & Johns, 2005) or use a larger exponent to minimize noise (e.g., Johns et al, 2016).…”
Section: Multi-vector Models
confidence: 99%
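
The exponent rule described in the statement above is easy to make concrete. The following Python sketch illustrates the standard MINERVA 2 retrieval computation (similarity to each trace, raised to an odd power, then used to weight the echo); the function name, toy vectors, and parameter names are my own illustration under that assumption, not code from the cited papers.

import numpy as np

def minerva2_echo(probe, memory, exponent=3):
    # Similarity of the probe to each stored trace, normalized by the number
    # of features that are nonzero in either the probe or the trace.
    relevant = np.count_nonzero((memory != 0) | (probe != 0), axis=1)
    similarity = memory @ probe / relevant

    # Raising similarity to an odd power (3 in standard MINERVA 2) keeps its
    # sign while shrinking weak, noisy matches toward zero.
    activation = similarity ** exponent

    intensity = activation.sum()      # echo intensity
    content = activation @ memory     # echo content
    return intensity, content

# Toy demonstration: three stored traces and a partial probe.
memory = np.array([[ 1, -1,  1,  0],
                   [ 1,  1, -1, -1],
                   [-1,  1,  1,  1]])
probe = np.array([1, -1, 0, 0])
print(minerva2_echo(probe, memory))

In this sketch the exponent plays the role that the sensitivity parameter c plays in the GCM, except that it is fixed at 3 rather than fitted by the modeller.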
“…To preserve the sign of the similarity, the MINERVA 2 model uses an exponent of 3 rather than 2. Larger odd-numbered exponents (5, 7, 9, ...) also preserve the sign of the similarity and can be used to further reduce the amount of noise in the echo (e.g., Johns et al., 2016).…”
Section: Minerva Versus Vector and Matrix Models
confidence: 99%
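
A quick numeric check of the point made above (my own illustration, not taken from the cited paper): odd powers keep the sign of a similarity, but larger exponents push weak, noise-level similarities toward zero much faster, so they contribute less to the echo.

import numpy as np

# A strong match, a weak (noise-level) match, and a strong mismatch.
similarities = np.array([0.9, 0.1, -0.9])

for k in (3, 5, 7, 9):
    activation = similarities ** k
    # Every odd power keeps the sign; the weak 0.1 match falls from 1e-3
    # at k = 3 to 1e-9 at k = 9, while the strong matches persist.
    print(k, activation)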