Checks on baseline differences in randomized controlled trials (RCTs) are often performed using null-hypothesis significance tests (NHSTs). However, using NHSTs to establish the degree of baseline similarity is inappropriate, potentially misleading, and logically incoherent: random assignment guarantees that any baseline differences arise from chance alone, so such tests evaluate a null hypothesis that is true by design.
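To make the incoherence concrete, here is a minimal simulation (a sketch in Python with NumPy and SciPy; the sample size, the Gaussian baseline covariate, and all variable names are illustrative assumptions, not taken from the paper). It repeatedly randomizes participants into two arms drawn from the same population and runs a baseline t-test; because the null is true by design, roughly 5% of these tests come out "significant" at alpha = .05 no matter how well the randomization worked.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_per_arm = 10_000, 50
p_values = []
for _ in range(n_trials):
    # Simulate one properly randomized RCT: both arms are drawn from the
    # same population, so any baseline difference is due to chance alone.
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    treatment = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    _, p = stats.ttest_ind(control, treatment)
    p_values.append(p)

# Under randomization the null is true by design, so p-values are uniform
# and about 5% of baseline tests are "significant" at alpha = .05.
print(f"Proportion of baseline tests with p < .05: "
      f"{np.mean(np.array(p_values) < 0.05):.3f}")
```

The printed proportion hovers around .05 regardless of sample size; it is simply the nominal false-positive rate, not evidence about the quality of the randomization.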
Within evolutionary biology, life-history theory is used to explain cross-species differences in allocation strategies regarding reproduction, maturation, and survival. Behavioral scientists have recently begun to conceptualize such strategies as within-species individual characteristics that are predictive of behavior. Although life-history theory provides an important framework for behavioral scientists, the psychometric approach to measuring life-history strategy, as operationalized by K-factors, involves conceptual entanglements. We argue that current psychometric attempts to identify K-factors rest on an unwarranted conflation of functional descriptions with proximate mechanisms, a conceptual mix-up that may generate unviable hypotheses and invites misinterpretation of empirical findings. The assumptions underlying generic psychometric methodology do not allow measurement of functionally defined variables; rather, these methods are confined to Mayr’s proximate causal realm. We therefore conclude that K-factor scales lack validity and that life-history strategy cannot be identified with psychometrics as usual. To align theory with methodology, we propose alternative methods and new avenues.
Experiments in psychology often target hypothetical constructs to test a causal hypothesis or theory. In light of this goal, it is pertinent to use a manipulation that actually changes the focal hypothetical construct, and only that construct. To assess whether such manipulation "success" can be assumed, researchers often include manipulation validity checks in their designs: a measure of the focal construct that should be responsive to the manipulation. One interpretation of a positive manipulation check is that it lends credence to a particular causal interpretation of a phenomenon, and scrutinizing its results supposedly enables a more stringent test of a causal hypothesis. This paper submits that manipulation checks do not improve our inferences to causal explanations but may in practice result in weaker hypothesis tests. Rather than being useful, manipulation checks are at best uninformative and more likely compromise the appraisal of a causal hypothesis. The second half of this paper advocates four methodological heuristics, offered as alternatives to manipulation validity checks, for testing causal hypotheses more severely. The heuristics call for a greater focus on (a) assessing the specificity of manipulations, (b) evaluating theoretical risk, (c) attempting to cast doubt on alternatives, and (d) appraising the relative merits of explanations. I conclude that, rather than relying on manipulation checks as a 'Band-Aid' method to alleviate validity concerns, inferential rigor can be improved through these heuristics.
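One way to see why a passing manipulation check cannot establish specificity is a small simulation (a sketch in Python with NumPy and SciPy; the effect sizes, sample size, and variable names are illustrative assumptions, not from the paper). The simulated manipulation does shift the focal construct, so the manipulation check passes, but it also shifts an unintended confound that alone drives the outcome; the check is silent about that alternative causal path.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200
manipulated = rng.integers(0, 2, size=n)  # 0 = control, 1 = treatment

# The manipulation shifts the focal construct AND an unintended confound.
focal = 0.5 * manipulated + rng.normal(size=n)
confound = 0.8 * manipulated + rng.normal(size=n)

# The outcome is driven entirely by the confound, not the focal construct.
outcome = 1.0 * confound + rng.normal(size=n)

# The manipulation check "passes": the focal construct differs by group...
_, p_check = stats.ttest_ind(focal[manipulated == 1], focal[manipulated == 0])
# ...and the outcome also differs, inviting the wrong causal conclusion.
_, p_outcome = stats.ttest_ind(outcome[manipulated == 1],
                               outcome[manipulated == 0])
print(f"manipulation check p = {p_check:.4f}, outcome effect p = {p_outcome:.4f}")
```

Both p-values are typically small here, yet the outcome effect owes nothing to the focal construct, which is exactly the inferential gap the heuristics above are meant to address.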