2018
DOI: 10.3758/s13423-018-1501-2
Using experiential optimization to build lexical representations

Abstract: To account for natural variability in cognitive processing, it is standard practice to optimize a model's parameters by fitting it to behavioral data. Although most language-related theories acknowledge a large role for experience in language processing, variability reflecting that knowledge is usually ignored when evaluating a model's fit to representative data. We fit language-based behavioral data using experiential optimization, a method that optimizes the materials that a model is given while retaining th…

Cited by 32 publications (45 citation statements)
References 110 publications (203 reference statements)
“…Nonetheless, a reviewer suggested an additional note of caution in such modeling enterprises. Representations in semantic space models are highly dependent on the choice of corpus (Johns et al., 2019; Mandera et al., 2017); this is illustrated in Supplementary Materials A, where a comparison between BEAGLE on two different corpora reveals different abilities to predict human performance in semantic ratings and synonym selection, and in Supplementary Materials C, where we demonstrate different weight parameters of the global similarity metrics when the TASA corpus is used instead of our novels corpus. In addition, Morton and Polyn (2016) demonstrated how results in free recall are contingent on both the choice of semantic space model and retrieval model.…”
Section: Matching Models
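The corpus-dependence point above can be made concrete with a small sketch. Given word vectors trained on two different corpora, the same similarity metric can assign the same word pair very different scores; the vectors below are invented toy values (not BEAGLE output), used only to illustrate the comparison, and `cosine` is a generic similarity helper rather than any author's code:

```python
import math

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy vectors for the same two words under two hypothetical training corpora.
corpus_a = {"doctor": [0.9, 0.1, 0.2], "nurse": [0.8, 0.2, 0.1]}
corpus_b = {"doctor": [0.2, 0.9, 0.1], "nurse": [0.7, 0.1, 0.6]}

sim_a = cosine(corpus_a["doctor"], corpus_a["nurse"])
sim_b = cosine(corpus_b["doctor"], corpus_b["nurse"])
# The same word pair receives a noticeably different similarity under each
# corpus, which is why a model's fit to human similarity ratings can shift
# with the training corpus.
```

Any downstream fit to human ratings inherits this variability, which is the caution the quoted passage raises.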
“…Our BEAGLE representations were constructed from a corpus of novels, which has been used in previous publications (Johns & Jamieson, 2018; Johns, Jones, & Mewhort, 2019; Mewhort et al., 2018). The corpus contained 39,076 unique words in 10,238,600…”
Section: Constructing Word Representations With BEAGLE
“…Younger and older adults differ in occupational status [37], social networks [38], and their use of the Internet and social media [39]. These differences in experience further contribute to shaping the contents of younger and older adults' lexical and semantic representations [40]. Regrettably, the extent to which differences in the amount and content of information exposed to younger and older adults determines their lexical and semantic representations and cognitive performance remains largely unexplored.…”
Section: Different Environments
“…By manipulating the parameter t, the subsampling distribution is changed. The correct setting for this parameter is likely corpus dependent, as there are significant deviations in frequency distributions by type of corpus used (see Johns & Jamieson and Johns, Jones, & Mewhort for examples).…”
Section: Modeling Framework
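A subsampling threshold t of this kind appears in the word2vec family of models, where a token of relative frequency f is kept with probability sqrt(t / f) (Mikolov et al.'s discard rule). The sketch below assumes that rule; the word counts are invented for illustration:

```python
import math

def keep_probability(count, total_tokens, t=1e-5):
    """Word2vec-style subsampling: a token whose relative frequency is
    f = count / total_tokens is kept with probability sqrt(t / f), capped
    at 1. Raising t retains more high-frequency tokens, so t reshapes the
    effective frequency distribution the model trains on."""
    f = count / total_tokens
    return min(1.0, math.sqrt(t / f))

# A very frequent word (f = 0.01) is heavily downsampled at a small t,
# but far less so when t is raised; since frequency distributions differ
# by corpus type, the best t plausibly differs by corpus as well.
p_small_t = keep_probability(1_000_000, 100_000_000, t=1e-5)
p_large_t = keep_probability(1_000_000, 100_000_000, t=1e-3)
```

Rare words have f < t, so the capped probability keeps them with certainty; only the high-frequency tail of the distribution is thinned.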
“…We used a corpus of 20 million sentences, derived by combining Wikipedia articles and non-fiction books (Johns, Jones, & Mewhort; Johns et al., in press). The corpus consists of approximately 120 million words.…”
Section: Modeling Framework