I offer an analysis of operationism in psychology, rooted in a historical study of the investigative practices of two of its early proponents (S. S. Stevens and E. C. Tolman). According to this analysis, early psychological operationists emphasized the importance of experimental operations and called for scientists to specify what kinds of operations were to count as empirical indicators for the referents of their concepts. While such specifications were referred to as "definitions," I show that these definitions were not taken to constitute a priori knowledge or to be analytically true. Rather, they served the pragmatic function of enabling scientists to do research on a purported phenomenon. I argue that historical and philosophical discussions of problems with operationism have conflated it, both conceptually and historically, with positivism, and I ask what the "real" issues behind the debate about operationism are.
Why Replication is Overrated

Current debates about the replication crisis in psychology take it for granted that direct replication is valuable and focus their attention on questionable research practices with regard to statistical analyses. This paper takes a broader look at the notion of replication as such. It is argued that all experimentation/replication involves individuation judgments and that research in experimental psychology frequently turns on probing the adequacy of such judgments. In this vein, I highlight the ubiquity of conceptual and material questions in research, and I argue that replication is not as central to psychological research as it is sometimes taken to be.

1. Introduction: The "Replication Crisis"

In the current debate about replicability in psychology, we can distinguish between (1) the question of why more replication studies are not done (e.g., Romero 2017) and (2) the question of why a significant portion (more than 60%) of studies, when they are done, fail to replicate (I take this number from the Open Science Collaboration, 2015). Debates about these questions have been dominated by two assumptions: first, that it is in general desirable for scientists to conduct replication studies that come as close as possible to the original, and second, that the low replication rate can often be attributed to statistical problems with many initial studies, sometimes referred to as "p-hacking" and "data-massaging."[1]

[1] An important player in this regard is the statistician Andrew Gelman, who has been using his blog as a public platform to debate methodological problems with mainstream social psychology (http://andrewgelman.com/).
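The introduction mentions "p-hacking" only in passing. As a side illustration, not drawn from the paper, the following Python sketch simulates one common form of it: measuring several dependent variables in a single study and counting the study as a success if any one comparison happens to reach significance. All variable names and sample sizes here are arbitrary choices made for the simulation.

    # Illustrative sketch (not from the paper): simulate one form of "p-hacking",
    # namely testing several outcome measures per study and declaring the study
    # "significant" if any single test crosses p < .05. Assumes numpy and scipy.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_studies, n_per_group, n_outcomes = 5000, 30, 5   # arbitrary illustration values
    false_positives = 0

    for _ in range(n_studies):
        # Both groups are drawn from the same distribution, so any "effect" is spurious.
        control = rng.normal(size=(n_outcomes, n_per_group))
        treatment = rng.normal(size=(n_outcomes, n_per_group))
        # Test every outcome; keep the study if at least one p-value falls below .05.
        pvals = [stats.ttest_ind(c, t).pvalue for c, t in zip(control, treatment)]
        if min(pvals) < 0.05:
            false_positives += 1

    print(f"Nominal alpha: 0.05, observed false-positive rate: "
          f"{false_positives / n_studies:.2f}")

With five independent outcome measures, the chance of at least one spurious "significant" result is roughly 1 - 0.95^5, or about 0.23, which illustrates one way an initial study can report an effect that later fails to replicate.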
This paper asks (a) how new scientific objects of research are conceptualized at a point in time when little is known about them, and (b) how those conceptualizations, in turn, figure in the process of investigating the phenomena in question. Contrasting my approach with existing notions of concepts and situating it in relation to existing discussions about the epistemology of experimentation, I propose to think of concepts as research tools. I elaborate on the conception of a tool that informs my account. Narrowing my focus to phenomena in cognitive neuropsychology, I then illustrate my thesis with the example of the concept of implicit memory. This account is based on an original reconstruction of the nature and function of operationism in psychology.
The last two decades have seen a rising interest in (a) the notion of a scientific phenomenon as distinct from theories and data, and (b) the intricacies of experimentally producing and stabilizing phenomena. This paper develops an analysis of the stabilization of phenomena that integrates two aspects that have largely been treated separately in the literature: one concerns the skills required for empirical work; the other concerns the strategies by which claims about phenomena are validated. I argue that in order to make sense of the process of stabilization, we need to distinguish between two types of phenomena: phenomena as patterns in the data ("surface regularities") and phenomena as underlying (or "hidden") regularities. I show that the epistemic relationships that data bear to each of these types of phenomena are different: Data patterns are instantiated by individual data, whereas underlying regularities are indicated by individual data, insofar as they instantiate a data pattern. Drawing on an example from memory research, I argue that neither of these two kinds of phenomenon can be stabilized in isolation. I conclude that what is stabilized when phenomena are stabilized is the fit between surface regularities and hidden regularities.