2004
DOI: 10.1016/j.cogpsych.2003.11.001
Assessing the distinguishability of models and the informativeness of data

Cited by 80 publications (95 citation statements)
References 55 publications
“…The qualitative and quantitative predictions of the mixture-of-processes hypothesis were compared to two existing […] And one final remark: note that the two-stage-processing model had the fewest parameters (12, rather than 13 for the bound-change hypothesis and 14 for the mixture-of-processes hypothesis) but gave the best account of the data. As is well known (see, e.g., Navarro, Pitt, & Myung, 2004; Wagenmakers, Ratcliff, Gomez, & Iverson, 2004), the number of parameters is not the only criterion for model comparison. In principle, one hypothesis or model might well be more flexible than the others, and fitting that model to data would therefore be easier than fitting the others.…”
Section: Do Different Experimental Setups Influence Choice Proportions?
confidence: 99%
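The passage above contrasts parameter counts (12 vs. 13 vs. 14) with goodness of fit. A standard way to trade the two off is a penalized criterion such as AIC or BIC. A minimal sketch follows; the log-likelihoods and sample size are invented for illustration, and only the parameter counts come from the quoted passage:

```python
import math

# Illustrative penalized model comparison for the three hypotheses above.
# The log-likelihood values and n are hypothetical, NOT from the cited study;
# only the parameter counts (12, 13, 14) appear in the quoted passage.
models = {
    "two-stage":    {"k": 12, "loglik": -1500.0},
    "bound-change": {"k": 13, "loglik": -1510.0},
    "mixture":      {"k": 14, "loglik": -1505.0},
}
n = 400  # hypothetical number of observations

# AIC = 2k - 2 log L;  BIC = k log(n) - 2 log L  (lower is better)
aic = {name: 2 * m["k"] - 2 * m["loglik"] for name, m in models.items()}
bic = {name: m["k"] * math.log(n) - 2 * m["loglik"] for name, m in models.items()}

best = min(aic, key=aic.get)  # "two-stage" under these invented numbers
```

Such criteria penalize parameter count, but as the quoted passage notes, counting parameters still ignores functional-form flexibility, which is what mimicry analyses address.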
“…What is the purpose, the reviewer wondered, of comparing the models if we already know that they are wrong? The question speaks to some important issues in model development and testing (for discussions of other issues in model testing, see Navarro, Pitt, & Myung, 2004; Pitt & Myung, 2002).…”
Section: Model Predictions With Backward Masking
confidence: 99%
“…Model comparison using model mimicry simulations: Wagenmakers et al. (2004) presented a general method to quantify model mimicry, termed the parametric bootstrap cross-fitting method (PBCM; see Navarro, Pitt, & Myung, 2004, for a similar technique). Consider a comparison between Model A and Model B.…”
confidence: 99%
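The PBCM idea sketched in the statement above can be illustrated with a toy pair of models. The sketch below assumes two hypothetical Gaussian "models" (A: mean fixed at 0; B: free mean) standing in for the competing cognitive models; it generates synthetic data from each, fits both models to every synthetic data set, and compares the resulting distributions of fit differences. Heavy overlap of those distributions means the models mimic each other:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the competing models (names are illustrative):
# Model A: Gaussian with mean fixed at 0, free sigma.
# Model B: Gaussian with free mean and free sigma (A is nested in B).

def fit_A(data):
    # Maximized log-likelihood of Model A (mu = 0, sigma at its MLE).
    var = max(np.mean(data ** 2), 1e-18)
    return np.sum(-0.5 * np.log(2 * np.pi * var) - data ** 2 / (2 * var))

def fit_B(data):
    # Maximized log-likelihood of Model B (mu and sigma at their MLEs).
    mu = np.mean(data)
    var = max(np.mean((data - mu) ** 2), 1e-18)
    return np.sum(-0.5 * np.log(2 * np.pi * var) - (data - mu) ** 2 / (2 * var))

def pbcm(n_boot=200, n_obs=50):
    # One PBCM-style pass: simulate data under each generating model,
    # fit BOTH models to every synthetic data set, and record the
    # log-likelihood difference fit_A - fit_B.
    diffs_from_A, diffs_from_B = [], []
    for _ in range(n_boot):
        data_A = rng.normal(0.0, 1.0, n_obs)   # generated under Model A
        data_B = rng.normal(0.5, 1.0, n_obs)   # generated under Model B
        diffs_from_A.append(fit_A(data_A) - fit_B(data_A))
        diffs_from_B.append(fit_A(data_B) - fit_B(data_B))
    return np.array(diffs_from_A), np.array(diffs_from_B)

dA, dB = pbcm()
# Well-separated distributions of dA and dB mean the data are
# informative enough to distinguish the two models; overlap means mimicry.
```

In the full method, the generating parameters are themselves obtained by fitting each model to the observed data (the "parametric bootstrap" step); the fixed generating values used here are a simplification for the sketch.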