The authors present a model to account for the miscombination of features when stimuli are presented with the rapid serial visual presentation (RSVP) technique (illusory conjunctions in the time domain). The model explains the distribution of responses as a mixture of trial outcomes: on some trials attention is successfully focused on the target, whereas on others the responses are based on partial information. Two experiments are reported that manipulated the mean processing time of the target-defining dimension and of the to-be-reported dimension, respectively. As predicted, the average origin of the responses is delayed when the target-defining dimension is lengthened and is earlier when the to-be-reported dimension is lengthened; in the first case the number of correct responses is dramatically reduced, whereas in the second it is unchanged. The results, a review of other research, and simulations carried out with a formal version of the model are all in close accordance with the predictions.
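The two-outcome mixture described above can be illustrated with a minimal simulation. This is only a sketch under assumed parameter values (the mixing probability `p_focus`, and a rounded-Gaussian lag for partial-information reports are illustrative choices, not the article's fitted model):

```python
import random

def simulate_report(p_focus=0.6, mean_lag=1.0, sd_lag=1.0, seed=None):
    """One RSVP trial under a two-outcome mixture: with probability
    p_focus attention is successfully focused on the target (reported
    lag 0); otherwise the report is based on partial information and
    originates from an item displaced in time (here a rounded Gaussian
    lag).  All parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    if rng.random() < p_focus:
        return 0                                # correct conjunction
    return round(rng.gauss(mean_lag, sd_lag))   # pre-/post-target intrusion

# The mean reported lag summarizes the "average origin" of responses;
# in the model, lengthening the target-defining dimension shifts it
# later, and lengthening the to-be-reported dimension shifts it earlier.
lags = [simulate_report(seed=s) for s in range(2000)]
mean_lag = sum(lags) / len(lags)
print(f"mean reported lag: {mean_lag:.2f}")
```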
A meta-analysis of the reliability of the scores from a specific test, also called reliability generalization, allows the quantitative synthesis of its properties from a set of studies. It is usually assumed that part of the variation in the reliability coefficients is due to some unknown and implicit mechanism that restricts and biases the selection of participants in the studies' samples. Sometimes this variation has been reduced by adjusting the coefficients by a formula associated with range restrictions. We propose a framework in which that variation is included (instead of adjusted) in the models intended to explain the variability and in which parallel analyses of the studies' means and variances are performed. Furthermore, the analysis of the residuals enables inferences to be made about the nature of the variability accounted for by moderator variables. The meta-analysis of the three statistics from the studies (reliability coefficient, mean, and variance) allows psychometric inferences about the test scores. A numerical example illustrates the proposed framework.
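The kind of range-restriction adjustment the abstract refers to can be sketched from classical test theory. Assuming a homogeneous error variance across groups, (1 − r)·s² equals the error variance in any sample, which yields the standard correction below; the reference variance `s2_ref` and the numbers are illustrative, and the proposed framework would model these quantities jointly rather than adjust them:

```python
def adjust_reliability(r_xx, s2_sample, s2_ref):
    """Classical range-restriction adjustment of a reliability
    coefficient.  Under a constant error variance,
    (1 - r_xx) * s2_sample = (1 - R) * s2_ref, so
    R = 1 - (s2_sample / s2_ref) * (1 - r_xx) is the reliability
    expected in a reference population with variance s2_ref."""
    return 1.0 - (s2_sample / s2_ref) * (1.0 - r_xx)

# A restricted sample (smaller variance) shows a lower observed
# coefficient; re-expressing it at the reference variance raises it.
r_adj = adjust_reliability(r_xx=0.80, s2_sample=64.0, s2_ref=100.0)
print(f"adjusted reliability: {r_adj:.3f}")  # 1 - 0.64 * 0.20 = 0.872
```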
Sequential rules are explored in the context of null hypothesis significance testing. Several studies have demonstrated that the fixed-sample stopping rule, in which the sample size used by researchers is determined in advance, is less practical and less efficient than sequential stopping rules. It is proposed that a sequential stopping rule called CLAST (composite limited adaptive sequential test) is a superior variant of COAST (composite open adaptive sequential test), a sequential rule proposed by Frick (1998). Simulation studies are conducted to test the efficiency of the proposed rule in terms of sample size and power. Two statistical tests are used: the one-tailed t test of mean differences with two matched samples, and the chi-square independence test for 2 × 2 contingency tables. The results show that the CLAST rule is more efficient than the COAST rule and reflects more realistically the practice of experimental psychology researchers.
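The stopping logic can be sketched as follows. Under a COAST-style rule, one keeps adding observations until the running p-value crosses a rejection boundary or a retention boundary; CLAST additionally caps the total sample size, forcing a decision at the cap. This sketch uses a one-tailed z test on paired differences with known unit variance as a stand-in for the matched-samples t test, and the boundary values (0.01 / 0.36, commonly cited for COAST) and cap are illustrative assumptions, not necessarily the article's settings:

```python
import math
import random

def p_one_tailed_z(diffs):
    """One-tailed p-value for H0: mean <= 0, assuming known unit
    variance (a z test stands in for the matched-samples t test)."""
    z = sum(diffs) / math.sqrt(len(diffs))
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def clast(effect=0.5, n_init=10, n_step=1, n_max=60,
          lower=0.01, upper=0.36, seed=0):
    """Simulate one experiment under a CLAST-style stopping rule:
    stop and reject if p < lower, stop and retain if p > upper,
    otherwise add observations; at n_max (the 'limited' feature)
    force a decision at the conventional .05 level."""
    rng = random.Random(seed)
    d = [rng.gauss(effect, 1.0) for _ in range(n_init)]
    while True:
        p = p_one_tailed_z(d)
        if p < lower:
            return "reject", len(d)
        if p > upper:
            return "retain", len(d)
        if len(d) >= n_max:                     # forced decision at the cap
            return ("reject" if p < 0.05 else "retain"), len(d)
        d.extend(rng.gauss(effect, 1.0) for _ in range(n_step))

# Efficiency is summarized by empirical power and mean sample size.
results = [clast(seed=s) for s in range(1000)]
power = sum(r == "reject" for r, _ in results) / len(results)
avg_n = sum(n for _, n in results) / len(results)
print(f"empirical power {power:.2f}, mean sample size {avg_n:.1f}")
```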