Research has demonstrated that implicit and explicit evaluations of the same object can diverge. Explanations of such dissociations frequently appeal to dual-process theories, according to which implicit evaluations are assumed to reflect object-valence contingencies independent of their perceived validity, whereas explicit evaluations reflect the perceived validity of object-valence contingencies. Although there is evidence supporting these assumptions, it remains unclear whether dissociations can arise in situations in which object-valence contingencies are judged to be true or false during the learning of these contingencies. Challenging dual-process accounts that propose the simultaneous operation of two parallel learning mechanisms, results from three experiments showed that the perceived validity of evaluative information about social targets qualified both explicit and implicit evaluations when validity information was available immediately after the encoding of the valence information; however, delaying the presentation of validity information reduced its qualifying impact on implicit, but not explicit, evaluations.
Experimental paradigms designed to assess "implicit" representations are currently very popular in many areas of psychology. The present article addresses the validity of three widespread assumptions in research using these paradigms: that (a) implicit measures reflect unconscious or introspectively inaccessible representations; (b) the major difference between implicit measures and self-reports is that implicit measures are resistant or less susceptible to social desirability; and (c) implicit measures reflect highly stable, older representations that have their roots in long-term socialization experiences. Drawing on a review of the available evidence, we conclude that the validity of all three assumptions is equivocal and that theoretical interpretations should be adjusted accordingly. We discuss an alternative conceptualization that distinguishes between activation and validation processes.
In this methodological commentary, we use Bem's (2011) recent article reporting experimental evidence for psi as a case study for discussing important deficiencies in modal research practice in empirical psychology. We focus on (a) overemphasis on conceptual rather than close replication, (b) insufficient attention to verifying the soundness of measurement and experimental procedures, and (c) flawed implementation of null hypothesis significance testing. We argue that these deficiencies contribute to weak method-relevant beliefs that, in conjunction with overly strong theory-relevant beliefs, lead to a systemic and pernicious bias in the interpretation of data that favors a researcher's theory. Ultimately, this interpretation bias increases the risk of drawing incorrect conclusions about human psychology. Our analysis points to concrete recommendations for improving research practice in empirical psychology. We recommend (a) a stronger emphasis on close replication, (b) routinely verifying the integrity of measurement instruments and experimental procedures, and (c) using stronger, more diagnostic forms of null hypothesis testing.
Cognitive complexity was measured in terms of dimensionality and articulation. The consistency of these measures across different measurement conditions was examined by correlating scores obtained from two sets of grids that differed in constructs, objects (role persons), and tasks (rating vs. grouping). The measures of dimensionality were the modified Bieri's matching score, Scott's D, and Ware's percent of variance of the first principal component; the measures of articulation were Bieri's matching score, Scott's C, and the number of groups. The main findings were as follows. (1) Dimensionality varied considerably between the two conditions differing in grid elements, whereas articulation remained relatively consistent. (2) According to the split-half results, varying the objects in a grid contributed more to the fluctuation of dimensionality than did varying the constructs.
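As an illustration of one of the dimensionality indices named in the abstract above, the following Python sketch computes the percent of variance captured by the first principal component of a repertory-grid rating matrix. It is a minimal sketch only: the function name, the grid dimensions, and the randomly generated example grids are illustrative assumptions, not the study's materials or analysis code.

    import numpy as np

    def first_pc_variance_percent(grid):
        """Percent of total variance captured by the first principal component
        of a repertory-grid rating matrix (rows = objects/role persons,
        columns = constructs). A higher percentage is commonly read as lower
        dimensionality, since ratings are dominated by a single dimension."""
        # Center each construct (column) before extracting components.
        centered = grid - grid.mean(axis=0)
        # Singular values of the centered matrix give the component variances.
        singular_values = np.linalg.svd(centered, compute_uv=False)
        variances = singular_values ** 2
        return 100.0 * variances[0] / variances.sum()

    # Hypothetical example: two grids sharing constructs but using different
    # role persons, rated on a 7-point scale.
    rng = np.random.default_rng(0)
    grid_a = rng.integers(1, 8, size=(10, 12)).astype(float)  # 10 objects x 12 constructs
    grid_b = rng.integers(1, 8, size=(10, 12)).astype(float)

    print(first_pc_variance_percent(grid_a))
    print(first_pc_variance_percent(grid_b))

Comparing such an index across grids that differ in objects or constructs (or across split halves of a grid) is one way to probe the kind of measurement consistency the abstract describes.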
There is currently an unprecedented level of doubt regarding the reliability of research findings in psychology. Many recommendations have been made to improve the current situation. In this article, we report results from PsychDisclosure.org, a novel open-science initiative that provides a platform for authors of recently published articles to disclose four methodological design specifications that are not required to be disclosed under current reporting standards but that are critical for accurate interpretation and evaluation of reported findings. Grassroots sentiment, as manifested in the positive and appreciative response to our initiative, indicates that psychologists want to see changes made at the systemic level regarding disclosure of such methodological details. Almost 50% of contacted researchers disclosed the requested design specifications for the four methodological categories (excluded subjects, nonreported conditions, nonreported measures, and sample size determination). Disclosed information provided by participating authors also revealed several instances of questionable editorial practices, which need to be thoroughly examined and redressed. On the basis of these results, we argue that the time is now for mandatory methods disclosure statements in all psychology journals, which would be an important step toward improving the reliability of findings in psychology.