Preprint (2017)
DOI: 10.31234/osf.io/a4hs9

Bayesian Hierarchical Finite Mixture Models of Reading Times: A Case Study

Abstract: This theoretical note presents a case study demonstrating the importance of Bayesian hierarchical mixture models as a modelling tool for evaluating the predictions of competing theories of cognitive processes. This note also contributes to improving current practices in data analysis in the psychological sciences. As a case study, we revisit two published data sets from psycholinguistics. In sentence comprehension, it is widely assumed that the distance between linguistic co-dependents affects the latency of d…
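
To make the model class named in the title concrete, here is a minimal sketch of a two-component lognormal mixture likelihood for reading times. This is not the authors' model (they fit fully Bayesian hierarchical versions in Stan); the parameter values, the simulation, and the plug-in evaluation below are illustrative assumptions only.

```python
# Sketch: two-component lognormal mixture likelihood for reading times.
# With probability theta an observation comes from a slower component.
import numpy as np
from scipy.special import logsumexp
from scipy.stats import lognorm

def mixture_loglik(rt, theta, mu1, sigma1, mu2, sigma2):
    """Total log-likelihood of reading times rt under the mixture."""
    comp1 = np.log1p(-theta) + lognorm.logpdf(rt, s=sigma1, scale=np.exp(mu1))
    comp2 = np.log(theta) + lognorm.logpdf(rt, s=sigma2, scale=np.exp(mu2))
    return logsumexp(np.stack([comp1, comp2]), axis=0).sum()

rng = np.random.default_rng(1)
# Illustrative simulation: 90% "fast" reads (~400 ms), 10% slow re-reads (~900 ms).
n = 500
slow = rng.random(n) < 0.10
rt = np.where(slow,
              rng.lognormal(np.log(900), 0.4, n),
              rng.lognormal(np.log(400), 0.3, n))
print(mixture_loglik(rt, 0.10, np.log(400), 0.3, np.log(900), 0.4))
```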

Cited by 4 publications (6 citation statements)
References 15 publications

Citation statements:
“…For example, if there are some (say, 5%) 0 ms VOTs in a data set (e.g., due to speech errors or some other reason), or if there is a mixture of distributions generating the data (as in the case of the English voiced stops produced by the 13 speakers who had prevoicing in some of their tokens), and the model assumes a Gaussian likelihood, the posterior predictive distributions and the distribution of the data will not line up. For a real-life example of such a situation, see Vasishth et al. (2017). In Figure 7, we use the Mandarin data to simulate such a situation by randomly replacing 5% of the data with 0 ms values.…”
Section: Research Questions (mentioning)
confidence: 99%
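
The mismatch this statement describes can be sketched with simulated data: 5% of VOT values are replaced with 0 ms, yet the model assumes a single Gaussian. A plug-in maximum-likelihood fit stands in here for a full Bayesian posterior predictive distribution, and the parameter values are illustrative, not taken from the cited Mandarin data.

```python
# Sketch: a Gaussian fit cannot reproduce a spike of exact-zero VOTs.
import numpy as np

rng = np.random.default_rng(0)
vot = rng.normal(80.0, 15.0, 1000)          # illustrative "true" VOTs (ms)
contaminated = vot.copy()
idx = rng.choice(contaminated.size, size=int(0.05 * contaminated.size),
                 replace=False)
contaminated[idx] = 0.0                      # replace 5% with 0 ms values

mu, sd = contaminated.mean(), contaminated.std(ddof=1)
pred = rng.normal(mu, sd, 100_000)           # predictive draws from the fit

# The fit smears probability over the whole lower tail instead of
# concentrating mass at 0 ms, so data and predictions do not line up.
print("data: share of values <= 0 ms:", np.mean(contaminated <= 0))
print("pred: share of values <= 0 ms:", np.mean(pred <= 0))
print("data 1st percentile:", np.percentile(contaminated, 1))
print("pred 1st percentile:", np.percentile(pred, 1))
```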
“…This method of model selection is useful, however, when one is interested in comparing the predictive performance of very different competing models. For fully worked examples of this approach (with reproducible code and data) in the context of cognitive modeling in psycholinguistics, see Nicenboim and Vasishth (2018) and Vasishth et al. (2017).…”
Section: Research Questions (mentioning)
confidence: 99%
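
Comparison by out-of-sample predictive performance can be sketched as follows. The cited papers use Bayesian cross-validation over Stan model fits; as a rough stand-in, the sketch below scores maximum-likelihood Gaussian-mixture fits from scikit-learn on held-out folds. The data, fold count, and component counts are all illustrative assumptions.

```python
# Sketch: compare a one-component and a two-component model by
# summed held-out log predictive density under K-fold CV.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

rng = np.random.default_rng(2)
# Simulated data from a genuinely two-component process.
y = np.concatenate([rng.normal(400, 50, 450),
                    rng.normal(900, 120, 50)]).reshape(-1, 1)

def heldout_lpd(n_components, y, n_splits=5):
    """Sum of held-out log predictive densities across folds."""
    total = 0.0
    for train, test in KFold(n_splits, shuffle=True,
                             random_state=0).split(y):
        model = GaussianMixture(n_components, random_state=0).fit(y[train])
        total += model.score_samples(y[test]).sum()
    return total

# The mixture should win on held-out predictive density here.
print("1 component :", heldout_lpd(1, y))
print("2 components:", heldout_lpd(2, y))
```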
“…Second, Bayesian procedures allow us to fit virtually any kind of distribution in a straightforward way. In the past, we have fit hierarchical mixture models (Nicenboim & Vasishth, 2018; Vasishth, Nicenboim, Chopin, & Ryder, 2017) and hierarchical measurement error models (Nicenboim, Roettger, & Vasishth, 2017; Vasishth, Beckman, Nicenboim, Li, & Kong, 2017). In this paper, we fit shifted lognormal mixed models, which lie outside the class of generalized linear models.…”
Section: Advantages of Bayesian Modeling (mentioning)
confidence: 99%
“…Second, Bayesian procedures allow us to fit virtually any kind of distribution in a straightforward way. In the past, we have fit hierarchical mixture models (Nicenboim & Vasishth, 2018; Vasishth, Nicenboim, Chopin, & Ryder, 2017) and hierarchical measurement error models (Nicenboim, Roettger, & Vasishth, 2017; Vasishth, Beckman, Nicenboim, Li, & Kong, 2017). In this paper, we fit shifted lognormal mixed models, which lie outside the class of generalized linear models.…”
Section: Bayesian Data Analysis for Statistical Inference (mentioning)
confidence: 99%
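
Both statements above mention shifted lognormal mixed models. As a minimal sketch of the likelihood only (not the hierarchical random-effects structure the authors fit, and with illustrative parameter values), a shifted lognormal models a response time as a constant shift plus a lognormal variate; scipy's three-parameter lognormal treats the shift as its `loc` parameter.

```python
# Sketch: simulate and fit a shifted lognormal, y = shift + LogNormal(mu, sigma).
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(3)
shift, mu, sigma = 200.0, np.log(300.0), 0.5    # illustrative values (ms)
rt = shift + rng.lognormal(mu, sigma, 1000)      # simulated response times

# Three-parameter MLE: scipy's lognorm maps shift -> loc and exp(mu) -> scale.
# (This fit can be numerically delicate on real data.)
s_hat, loc_hat, scale_hat = lognorm.fit(rt)
print("shift ~", loc_hat, " sigma ~", s_hat, " exp(mu) ~", scale_hat)
```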