As a potential alternative to standard null hypothesis significance testing, we describe methods for graphical presentation of data, particularly condition means and their corresponding confidence intervals, for a wide range of factorial designs used in experimental psychology. We describe and illustrate confidence intervals specifically appropriate for between-subject versus within-subject factors. For designs involving more than two levels of a factor, we describe the use of contrasts for graphical illustration of theoretically meaningful components of main effects and interactions. These graphical techniques lend themselves to a natural and straightforward assessment of statistical power.

Null hypothesis significance testing (NHST), although hotly debated in the psychological literature on statistical analysis (e.g., Chow, 1998; Cohen, 1990, 1994; Hagen, 1997; Hunter, 1997; Lewandowsky & Maybery, 1998; Loftus, 1991, 1993, 1996, 2002; Schmidt, 1996), is not likely to go away any time soon (Krueger, 2001). Generations of students from multiple disciplines continue to be schooled in the NHST approach to interpreting empirical data, and practicing scientists rely almost reflexively on the logic and methods associated with it. Our goal here is not to extend this debate, but rather to enhance understanding of a particular alternative to NHST for interpreting data. In our view, to the extent that a variety of informative means of constructing inferences from data are made available and clearly understood, researchers will increase their likelihood of forming appropriate conclusions and communicating effectively with their audiences.

A number of years ago, we advocated and described computational approaches to the use of confidence intervals as part of a graphical approach to data interpretation (Loftus & Masson, 1994; see also Loftus, 2002). The power and effectiveness of graphical data presentation is undeniable (Tufte, 1983), and it is common in all forms of scientific communication in experimental psychology and in other fields. In many instances, however, plots of descriptive statistics (typically means) are not accompanied by any indication of the variability or stability associated with those statistics. The diligent reader is then forced to refer to a dreary accompanying recital of significance tests to determine how the pattern of means should be interpreted.

It has become clear, through interactions with colleagues and from queries we have received about the use of confidence intervals in conjunction with graphical presentation of data, that more information is needed about the practical, computational steps involved in generating confidence intervals, particularly with respect to designs involving interactions among variables. In this article, we briefly explain the logic behind confidence intervals for both between-subject and within-subject designs, then move to a consideration of a range of multifactor designs wherein interaction effects are of interest. Methods for comp...
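To make the between-subject versus within-subject distinction concrete, the sketch below computes the half-width of a within-subject confidence interval in the spirit of Loftus and Masson (1994): the error term is the subject-by-condition interaction mean square, so stable individual differences between subjects do not inflate the interval. This is a minimal illustration, not the article's own worked procedure; the function name and example data are hypothetical, and it assumes a complete subjects-by-conditions score matrix.

```python
import numpy as np
from scipy import stats

def within_subject_ci(scores, confidence=0.95):
    """Half-width of a within-subject confidence interval for condition
    means (Loftus & Masson, 1994, style), based on the subject-by-condition
    interaction mean square.

    scores : n_subjects x n_conditions array, one score per cell
    """
    scores = np.asarray(scores, dtype=float)
    n_subj, n_cond = scores.shape
    grand_mean = scores.mean()
    subj_means = scores.mean(axis=1, keepdims=True)
    cond_means = scores.mean(axis=0, keepdims=True)
    # Residuals after removing subject and condition effects.
    resid = scores - subj_means - cond_means + grand_mean
    df_error = (n_subj - 1) * (n_cond - 1)
    ms_error = (resid ** 2).sum() / df_error   # MS of the S x C interaction
    t_crit = stats.t.ppf((1 + confidence) / 2, df_error)
    return t_crit * np.sqrt(ms_error / n_subj)

# Hypothetical scores for 5 subjects in 3 within-subject conditions.
scores = np.array([[10., 13., 13.],
                   [ 6.,  8.,  8.],
                   [11., 14., 14.],
                   [22., 23., 25.],
                   [15., 18., 17.]])
print("condition means:", scores.mean(axis=0))
print("within-subject CI half-width:", within_subject_ci(scores))
```

For a purely between-subject factor, the analogous half-width would instead be built from the pooled within-group mean square and the per-group sample size.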
Null-hypothesis significance testing remains the standard inferential tool in cognitive science despite its serious disadvantages. Primary among these is the fact that the resulting probability value does not tell the researcher what he or she usually wants to know: How probable is a hypothesis, given the obtained data? Inspired by developments presented by Wagenmakers (Psychonomic Bulletin & Review, 14, 779-804, 2007), I provide a tutorial on a Bayesian model selection approach that requires only a simple transformation of sum-of-squares values generated by the standard analysis of variance. This approach generates a graded level of evidence regarding which model (e.g., effect absent [null hypothesis] vs. effect present [alternative hypothesis]) is more strongly supported by the data. This method also obviates admonitions never to speak of accepting the null hypothesis. An Excel worksheet for computing the Bayesian analysis is provided as supplemental material.
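As an illustration of the transformation this abstract refers to, the sketch below converts ANOVA sums of squares into an approximate Bayes factor and posterior model probabilities via the BIC, in the spirit of Wagenmakers (2007) and Masson (2011). It assumes equal prior odds for the null and alternative models; the function name and the example values are hypothetical, and readers should consult the tutorial for how to count independent observations in repeated measures designs.

```python
import math

def bic_posterior(ss_effect, ss_error, n, df_effect):
    """Approximate posterior probabilities for the null (effect absent)
    and alternative (effect present) models from ANOVA sums of squares,
    using the BIC approximation to the Bayes factor.

    ss_effect : sum of squares for the effect being tested
    ss_error  : sum of squares for the corresponding error term
    n         : number of independent observations
    df_effect : degrees of freedom for the effect (extra parameters
                in the alternative model)
    """
    sse_null = ss_effect + ss_error          # residual variability under H0
    sse_alt = ss_error                       # residual variability under H1
    delta_bic = n * math.log(sse_alt / sse_null) + df_effect * math.log(n)
    bf01 = math.exp(delta_bic / 2)           # Bayes factor favoring H0
    p_h0 = bf01 / (1 + bf01)                 # assumes equal prior odds
    return p_h0, 1 - p_h0

# Hypothetical example: SS_effect = 30, SS_error = 200, 24 observations,
# and 1 degree of freedom for the effect.
p_null, p_alt = bic_posterior(30.0, 200.0, 24, 1)
print(f"p(H0 | D) = {p_null:.3f}, p(H1 | D) = {p_alt:.3f}")
```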
An alternative to semantic network models of lexical knowledge representation and access is described, in which knowledge about a word is represented as a pattern of activation across a collection of processing units. In this distributed memory model, semantic priming effects arise naturally from the similarity of the patterns of activation that represent a related prime and target. Priming effects can be reduced by an intervening stimulus that modifies the pattern of activation before the target appears. This process is demonstrated empirically with a word naming task. An implemented version of the distributed memory model is used to simulate these results, and results from previous research in which participants overtly responded to the item that intervened between a prime and target are also simulated. Comparisons with semantic network and compound cue models of priming are discussed.
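A toy illustration of the central idea (not the implemented model reported in the article): if a word's meaning is a pattern of activation over many units, the state left over from a related prime overlaps more with the target's pattern than the state left by an unrelated prime does, and an intervening item that overwrites part of that state shrinks the advantage. Pattern sizes, overlap proportions, and variable names below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000  # number of processing units in the meaning layer

def random_pattern():
    """A word's meaning as a random pattern of +1/-1 activations."""
    return rng.choice([-1.0, 1.0], size=N)

def related_pattern(base, shared=0.7):
    """A pattern sharing a proportion of its units with `base`,
    standing in for a semantically related word."""
    pattern = random_pattern()
    mask = rng.random(N) < shared
    pattern[mask] = base[mask]
    return pattern

def overlap(a, b):
    """Normalized similarity between two activation patterns."""
    return float(a @ b) / N

target = random_pattern()
related_prime = related_pattern(target)
unrelated_prime = random_pattern()
intervening = random_pattern()

# Head start for the target = similarity between the current state of the
# meaning units (left over from the prime) and the target's own pattern.
print("related prime   ->", overlap(related_prime, target))
print("unrelated prime ->", overlap(unrelated_prime, target))

# An intervening stimulus overwrites part of the state, reducing priming.
state = related_prime.copy()
mask = rng.random(N) < 0.5
state[mask] = intervening[mask]
print("related prime + intervening item ->", overlap(state, target))
```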
The influence of semantic ambiguity on word identification processes was explored in a series of word naming and lexical-decision experiments. There was no reliable ambiguity effect in 2 naming experiments, although an ambiguity advantage in lexical decision was obtained when orthographically legal nonwords were used. No ambiguity effect was found in lexical decision when orthographically illegal nonwords were used, implying a semantic locus for the ambiguity advantage. These results were simulated by using a distributed memory model that also produces the ambiguity disadvantage in gaze duration that has been obtained with a reading comprehension task. Ambiguity effects in the model arise from the model's attempt to activate multiple meanings of an ambiguous word in response to presentation of that word's orthographic pattern. Reasons for discrepancies in empirical results and implications for distributed memory models are considered.

Any comprehensive theory of mental representation and process must accommodate the complex means by which concepts are communicated through language. Through the course of history, humans have developed tools of communication that facilitate the relaying of ideas and concepts, such as a writing system or orthography. This mapping of concepts to orthography is not entirely one to one, however, resulting in some words that correspond to multiple concepts, which are known as semantically ambiguous words. When reading text, the context provided by preceding words and sentences provides a means of disambiguating such words. As a result, we may not even notice the ambiguity in words that we are reading in context. If, on the other hand, semantically ambiguous words are presented in isolation, their alternative meanings are readily accessible, and thus their ambiguous nature is noticed. In the research reported in this article, we compare performance on semantically ambiguous words with that of semantically unambiguous words in isolated word identification tasks and describe simulations of the empirical effects within the framework of a distributed memory architecture (Masson, 1995).

The effect of semantic ambiguity on isolated word identification has usually been determined by comparing performance on unambiguous words (which are associated with only one meaning) with performance on ambiguous words ...
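To give a rough sense of the mechanism described in the abstract above (a toy sketch, not the model reported in the article): when one orthographic pattern has been associated with two different meaning patterns, the state it evokes is a compromise between them, and that compromise resembles either individual meaning far less than an unambiguous word's evoked state resembles its single meaning. The pattern size and the blending rule below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1000  # number of meaning units

def random_meaning():
    """A meaning as a random +1/-1 activation pattern."""
    return rng.choice([-1.0, 1.0], size=N)

def similarity(a, b):
    """Normalized overlap between two activation patterns."""
    return float(a @ b) / N

# Unambiguous word: its orthography has been paired with a single meaning,
# so that meaning is what it evokes.
single_meaning = random_meaning()
evoked_unambiguous = single_meaning

# Ambiguous word: the same orthography has been paired with two meanings,
# so what it evokes is a compromise (agree where the meanings agree,
# pick at random where they conflict).
meaning_a, meaning_b = random_meaning(), random_meaning()
evoked_ambiguous = np.where(meaning_a == meaning_b,
                            meaning_a,
                            rng.choice([-1.0, 1.0], size=N))

print("unambiguous word vs. its meaning:", similarity(evoked_unambiguous, single_meaning))
print("ambiguous word vs. meaning A    :", similarity(evoked_ambiguous, meaning_a))
print("ambiguous word vs. meaning B    :", similarity(evoked_ambiguous, meaning_b))
```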