The pivotal problem of comorbidity research lies in the psychometric foundation it rests on, that is, latent variable theory, in which a mental disorder is viewed as a latent variable that causes a constellation of symptoms. From this perspective, comorbidity is a (bi)directional relationship between multiple latent variables. We argue that such a latent variable perspective encounters serious problems in the study of comorbidity, and offer a radically different conceptualization in terms of a network approach, where comorbidity is hypothesized to arise from direct relations between symptoms of multiple disorders. We propose a method to visualize comorbidity networks and, based on an empirical network for major depression and generalized anxiety, we argue that this approach generates realistic hypotheses about pathways to comorbidity, overlapping symptoms, and diagnostic boundaries, hypotheses that are not naturally accommodated by latent variable models: Some pathways to comorbidity through the symptom space are more likely than others; those pathways generally have the same direction (i.e., from symptoms of one disorder to symptoms of the other); overlapping symptoms play an important role in comorbidity; and boundaries between diagnostic categories are necessarily fuzzy.
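To make the network conceptualization concrete, the sketch below builds a toy symptom network in Python with networkx. The symptom labels, the edges, and the use of shortest paths as a proxy for likely comorbidity pathways are our illustrative assumptions, not the empirical major depression/generalized anxiety network from the article.

```python
# A toy comorbidity network: nodes are symptoms, edges are direct
# symptom-symptom relations. Labels and edges are hypothetical
# illustrations, not the empirical MD/GAD network from the article.
import networkx as nx

G = nx.Graph()
md = ["depressed mood", "loss of interest", "weight change"]   # MD symptoms
gad = ["excessive worry", "muscle tension", "irritability"]    # GAD symptoms
shared = ["sleep problems", "fatigue"]                         # overlapping symptoms

G.add_edges_from([
    ("depressed mood", "loss of interest"),
    ("loss of interest", "fatigue"),
    ("depressed mood", "sleep problems"),
    ("sleep problems", "fatigue"),
    ("sleep problems", "excessive worry"),
    ("fatigue", "muscle tension"),
    ("excessive worry", "muscle tension"),
    ("excessive worry", "irritability"),
    ("weight change", "depressed mood"),
])

# Pathways to comorbidity: shortest symptom routes from each MD symptom
# to each GAD symptom, with the overlapping symptoms each route crosses.
for s in md:
    for t in gad:
        path = nx.shortest_path(G, s, t)
        bridges = [n for n in path if n in shared]
        print(f"{s} -> {t} via {bridges}: {' -> '.join(path)}")
```

In this toy graph nearly every short route between the two symptom clusters passes through the shared symptoms, which is the network analogue of the claim that overlapping symptoms play an important role in comorbidity.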
The veracity of substantive research claims hinges on the way experimental data are collected and analyzed. In this article, we discuss an uncomfortable fact that threatens the core of psychology's academic enterprise: almost without exception, psychologists do not commit themselves to a method of data analysis before they see the actual data. It then becomes tempting to fine-tune the analysis to the data in order to obtain a desired result, a procedure that invalidates the interpretation of the common statistical tests. The extent of the fine-tuning varies widely across experiments and experimenters but is almost impossible for reviewers and readers to gauge. To remedy the situation, we propose that researchers preregister their studies and indicate in advance the analyses they intend to conduct. Only these analyses deserve the label "confirmatory," and only for these analyses are the common statistical tests valid. Other analyses can be carried out but these should be labeled "exploratory." We illustrate our proposal with a confirmatory replication attempt of a study on extrasensory perception.
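A small simulation makes the problem concrete. The sketch below is our illustration, not an analysis from the article: it generates data under the null hypothesis and then "fine-tunes" by reporting whichever of several defensible-looking tests yields the smallest p value, which drives the false-positive rate above the nominal 5%.

```python
# Why post hoc fine-tuning invalidates standard tests: under the null,
# picking the best-looking of several analyses inflates the Type I
# error rate beyond the nominal alpha = .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n, alpha = 5000, 40, 0.05
hits = 0

for _ in range(n_sims):
    x = rng.normal(size=n)  # null data: the true effect is exactly zero
    # Three analyses a researcher might "try": full sample, outliers
    # trimmed, and a nonparametric test. Each is defensible in isolation.
    p1 = stats.ttest_1samp(x, 0).pvalue
    trimmed = x[np.abs(x - x.mean()) < 2 * x.std()]
    p2 = stats.ttest_1samp(trimmed, 0).pvalue
    p3 = stats.wilcoxon(x).pvalue
    # Fine-tuning: report whichever analysis "worked".
    hits += min(p1, p2, p3) < alpha

print(f"False-positive rate with fine-tuning: {hits / n_sims:.3f}")
# Noticeably above the nominal 0.05, even though every individual
# test is valid on its own.
```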
Scores on cognitive tasks used in intelligence tests correlate positively with each other, that is, they display a positive manifold of correlations. The positive manifold is often explained by positing a dominant latent variable, the g factor, associated with a single quantitative cognitive or biological process or capacity. In this article, a new explanation of the positive manifold based on a dynamical model is proposed, in which reciprocal causation or mutualism plays a central role. It is shown that the positive manifold emerges purely from positive, beneficial interactions between cognitive processes during development. A single underlying g factor plays no role in the model. The model offers explanations of important findings in intelligence research, such as the hierarchical factor structure of intelligence, the low predictability of intelligence from early childhood performance, the integration/differentiation effect, the increase in heritability of g, and the Jensen effect, and is consistent with current explanations of the Flynn effect.
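A minimal simulation conveys the core mechanism. The coupled-logistic form below follows the spirit of the mutualism model; the specific parameter values, the uniform capacity distribution, and the constant interaction weight are our assumptions for the sketch, not the authors' calibration.

```python
# Mutualism sketch: p cognitive processes per subject grow logistically
# and boost one another through a positive interaction matrix M. No
# common (g) factor enters the data-generating process.
import numpy as np

rng = np.random.default_rng(0)
n_subj, p, steps, dt = 500, 6, 4000, 0.01

M = 0.1 * np.ones((p, p))      # positive couplings between processes
np.fill_diagonal(M, 0.0)

a = 0.5                                        # growth rate
K = rng.uniform(0.5, 1.5, size=(n_subj, p))    # subject-specific capacities
x = np.full((n_subj, p), 0.05)                 # small starting levels

for _ in range(steps):
    growth = a * x * (1 - x / K)     # logistic growth toward capacity
    mutual = a * x * (x @ M) / K     # positive interactions (mutualism)
    x += dt * (growth + mutual)

# Every pairwise correlation between the p final "test scores" comes
# out positive: a positive manifold produced by mutualism alone.
print(np.round(np.corrcoef(x, rowvar=False), 2))
```

Because the capacities K are drawn independently per process, setting the couplings in M to zero would leave the scores roughly uncorrelated; switching the positive interactions on is what produces the manifold.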
For a two-choice response time (RT) task, the observed variables are response speed and response accuracy. In experimental psychology, inference usually concerns the mean response time for correct decisions (i.e., MRT) and the proportion of correct decisions (i.e., Pc). The immediate problem is that MRT and Pc are in a trade-off relationship: Participants can respond faster, and hence decrease MRT, at the expense of making more errors, thereby decreasing Pc (see, e.g., Pachella, 1974; Schouten & Bekker, 1967; Wickelgren, 1977). This so-called speed-accuracy trade-off has long bedeviled the field.

Consider 2 participants in an experiment, Amy and Rich. Amy's and Rich's performance is summarized by MRT = 0.422 sec, Pc = .881, and MRT = 0.467 sec, Pc = .953, respectively. Amy responds faster than Rich, but she also commits more errors. Thus, it could be that Amy and Rich have the same ability, but Amy risks making more mistakes. It could also be that Amy's ability is higher than Rich's, or vice versa. If we consider only MRT and Pc, there appears to be no way to tell which of these three possibilities is in fact true. Now consider George, whose performance is characterized by MRT = 0.517 sec, Pc = .953. George responds more slowly than Rich, whereas their error rates are identical. An explanation solely in terms of the speed-accuracy trade-off cannot account for this pattern of results, and therefore most researchers would confidently conclude that Rich performs better than George. Unfortunately, if we consider only MRT and Pc, it is impossible to go beyond these conclusions in terms of ordinal relations and quantify how much better Rich does than George. Note that the same arguments would hold if the example above had been in terms of 1 participant who responds in three different experimental conditions presented in three separate blocks of trials. In this case, comparison of performance across the different conditions is complicated by the fact that task performance may be simultaneously influenced by task difficulty and response conservativeness.

In sum, both MRT and Pc provide valuable information about task difficulty or subject ability, but neither of these variables can be considered in isolation. When MRT and Pc are considered simultaneously, however, it is not clear how to weigh their relative contributions to arrive at a single index that quantifies subject ability or task difficulty.

A way out of this conundrum is to use cognitive process models to estimate the unobserved variables assumed to underlie performance in the task at hand. The field of research that uses cognitive models for measurement has been termed cognitive psychometrics (Batchelder, 1998; Batchelder & Riefer, 1999; Riefer, Knapp, Batchelder, Bamber, & Manifold, 2002), and similar approaches in other paradigms have included those of Busemeyer and Stout (2002); Stout, Busemeyer, Lin, Grant, and Bonson (2004); and Zaki and Nosofsky (2001). Here, the focus is on the diffusion model for two-choice RT tasks (see, e.g., Ratcliff, 1978...
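One closed-form instance of this approach is the EZ-diffusion model (Wagenmakers, van der Maas, & Grasman, 2007), which maps MRT, the variance of correct RTs (VRT), and Pc onto a drift rate (subject ability), a boundary separation (response conservativeness), and a nondecision time. The sketch below implements those published equations with the conventional scaling constant s = 0.1; the VRT values for Rich and George are hypothetical, since the excerpt above reports only MRT and Pc.

```python
# EZ-diffusion: closed-form mapping from (MRT, VRT, Pc) to drift rate v,
# boundary separation a, and nondecision time Ter (Wagenmakers, van der
# Maas, & Grasman, 2007). s is the conventional scaling constant.
import numpy as np

def ez_diffusion(mrt, vrt, pc, s=0.1):
    """Recover (v, a, Ter) from mean RT, RT variance, and accuracy."""
    if pc in (0.0, 0.5, 1.0):
        raise ValueError("Pc of 0, .5, or 1 requires an edge correction.")
    L = np.log(pc / (1 - pc))                      # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = np.sign(pc - 0.5) * s * x**0.25            # drift rate (ability)
    a = s**2 * L / v                               # boundary separation
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))
    return v, a, mrt - mdt                         # Ter = MRT - mean decision time

# Hypothetical VRT values for Rich and George (the excerpt gives only
# MRT and Pc); the decomposition can now quantify *how* their latent
# parameters differ, not merely order their performance.
for name, mrt, vrt, pc in [("Rich", 0.467, 0.024, 0.953),
                           ("George", 0.517, 0.024, 0.953)]:
    v, a, ter = ez_diffusion(mrt, vrt, pc)
    print(f"{name}: v={v:.3f}, a={a:.3f}, Ter={ter:.3f}")
```

With equal accuracy and (here, by assumption) equal RT variance, Rich and George receive identical drift rates and boundary separations and differ only in nondecision time, which is exactly the kind of statement MRT and Pc alone cannot support.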
Does psi exist? D. J. Bem (2011) conducted 9 studies with over 1,000 participants in an attempt to demonstrate that future events retroactively affect people's responses. Here we discuss several limitations of Bem's experiments on psi; in particular, we show that the data analysis was partly exploratory and that one-sided p values may overstate the statistical evidence against the null hypothesis. We reanalyze Bem's data with a default Bayesian t test and show that the evidence for psi is weak to nonexistent. We argue that in order to convince a skeptical audience of a controversial claim, one needs to conduct strictly confirmatory studies and analyze the results with statistical tests that are conservative rather than liberal. We conclude that Bem's p values do not indicate evidence in favor of precognition; instead, they indicate that experimental psychologists need to change the way they conduct their experiments and analyze their data.
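The default Bayesian t test referred to here is typically the JZS test (Rouder, Speckman, Sun, Morey, & Iverson, 2009). Below is a minimal numerical sketch of the one-sample JZS Bayes factor; the t value and sample size in the usage line are hypothetical illustrations, not Bem's data.

```python
# JZS default Bayes factor for a one-sample t test (Rouder, Speckman,
# Sun, Morey, & Iverson, 2009): BF01 compares the point null against a
# Cauchy(0, 1) prior on effect size.
import numpy as np
from scipy import integrate

def jzs_bf01(t, n):
    """BF01 for a one-sample t test with observed t value and sample size n."""
    v = n - 1  # degrees of freedom
    null_like = (1 + t**2 / v) ** (-(v + 1) / 2)

    def integrand(g):
        # Marginal likelihood under H1, integrating over the g prior.
        return ((1 + n * g) ** -0.5
                * (1 + t**2 / ((1 + n * g) * v)) ** (-(v + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))

    alt_like, _ = integrate.quad(integrand, 0, np.inf)
    return null_like / alt_like

# Hypothetical example: t = 2.0 with n = 100 is "significant" by the
# usual test, yet BF01 > 1 here, i.e., the data modestly favor the null.
print(f"BF01 = {jzs_bf01(2.0, 100):.2f}")
```

This is the sense in which one-sided p values can overstate the evidence: a conservative default Bayesian analysis of the same numbers need not favor the alternative at all.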