Divergent thinking tasks are the cornerstone of creative thinking assessment. Besides fluency (the number of generated ideas), several other scores have been used to measure different aspects of idea generation in divergent thinking tasks. However, high correlations among all such scores are quite common. These correlations, in particular the high correlations of any score with fluency, have been interpreted as evidence for the unidimensionality of divergent thinking, or as evidence for equal odds. On the other hand, it has been argued that common scores do not properly adjust for fluency. Moreover, it has been assumed that high correlations are artifactual, that is, caused by same-task method bias. In this article, the confounding of additive scorings by fluency is analyzed quantitatively and theoretically. We show that the raw correlations between fluency and quality alone cannot distinguish between different competing theories about idea generation. We propose a formal definition of purely artifactual correlation that is oriented toward the generation process and makes it possible to test these conflicting theories. The performance of the test is carefully evaluated in a thorough simulation study, and its application is exemplified by a reevaluation of past results. We conclude with recommendations for the design and analysis of future studies.
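To make the confounding concrete, the following minimal sketch (with purely hypothetical sample sizes and distributions) simulates ideas whose individual quality is unrelated to fluency; the additive (summed) quality score nevertheless correlates strongly with fluency, which is the kind of purely artifactual correlation discussed above. This is an illustration only, not the test proposed in the article.

```python
import numpy as np

rng = np.random.default_rng(2024)
n_persons = 500                      # hypothetical sample size

# Fluency: number of generated ideas per person.
fluency = rng.poisson(lam=10, size=n_persons) + 1

# Each idea's quality is drawn independently of the person's fluency,
# so there is no person-level link between producing more ideas and
# producing better ideas.
summed_quality = np.array([rng.normal(loc=0.5, scale=1.0, size=k).sum()
                           for k in fluency])

# The additive (summed) quality score still correlates strongly with
# fluency -- an artifactual correlation in the sense discussed above.
print(round(np.corrcoef(fluency, summed_quality)[0, 1], 2))
```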
In the presented work, a shift of perspective with respect to the dimensionality of divergent thinking (DT) tasks is introduced, moving from the question of multidimensionality across DT scores (i.e., fluency, flexibility, or originality) to the question of multidimensionality within one holistic score of DT performance (i.e., snapshot ratings of creative quality). We apply IRTree models to test whether unidimensionality assumptions hold under different task instructions for snapshot scoring of DT tests, across Likert-scale points and varying levels of fluency. Evidence for unidimensionality across scale points was stronger with be-creative instructions than with be-fluent instructions, which suggests better psychometric quality of ratings when be-creative instructions are used. In addition, the creative quality latent variables pertaining to low-fluency and high-fluency ideational pools shared around 50% of their variance, which suggests both strong overlap and clear differentiation. The presented approach makes it possible to further examine the psychometric quality of subjective ratings and to address new questions with respect to within-item multidimensionality in DT.
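As a rough illustration of the IRTree idea invoked here, the sketch below expands Likert-type snapshot ratings into binary pseudo-items along a linear (sequential) tree; the specific tree structure and the 4-point scale are assumptions for illustration and need not match the model reported in the article.

```python
import numpy as np

# Linear (sequential) tree for a 4-point rating scale: pseudo-item k asks
# "does the rating exceed category k?".  np.nan marks nodes that are not
# reached for a given response.
mapping = {
    1: [0, np.nan, np.nan],
    2: [1, 0, np.nan],
    3: [1, 1, 0],
    4: [1, 1, 1],
}

ratings = [1, 3, 4, 2, 3]                      # hypothetical snapshot ratings
pseudo_items = np.array([mapping[r] for r in ratings])
print(pseudo_items)
# Each column (pseudo-item) can be modelled with its own latent variable;
# testing whether these dimensions collapse into one is the
# unidimensionality question across scale points.
```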
Simonton's equal odds baseline assumes that the number of creative hits is a positive linear function of the number of attempts (i.e., products). It is important for research on the productivity of innovators and scientists, small-group brainstorming, and divergent thinking. It was proposed within a stochastic model of creative production in the field of scientific creativity (e.g., publications of scientists). Tests of the equal odds baseline commonly rely on tests of the correlation between quantity and additive quality of output, which Forthmann, Szardenings, and Holling (2018) demonstrated to be inconclusive. In contrast, the current work uses a very strict version of equal odds (i.e., assuming constant hit ratios) as a starting point to examine the model at its roots. A deviation from this stricter variant of the equal odds baseline, however, is not merely a dichotomous decision. The current work introduces meta-analytical techniques that provide useful statistics to quantify the amount of hit ratio variation attributable to individual differences in the equal odds baseline. This approach further allows varying levels of hit-ratio reliability, as a function of total output, to be taken into account. The approach is illustrated for cross-sectional and longitudinal equal odds with data sets from the fields of innovator and scientist productivity, small-group brainstorming, and divergent thinking research.
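A minimal simulation of the strict equal odds baseline, under assumed distributions, may help: with a constant hit probability for everyone, hits rise linearly with attempts, and observed hit ratios vary only through sampling error, with noisier (less reliable) ratios for low-output individuals.

```python
import numpy as np

rng = np.random.default_rng(7)
n_creators = 300                               # hypothetical number of individuals
attempts = rng.integers(2, 61, size=n_creators)

# Strict equal odds: every attempt is a hit with the same constant
# probability for everyone, so hit ratios differ only by sampling error.
p_hit = 0.3
hits = rng.binomial(attempts, p_hit)
hit_ratio = hits / attempts

# Hits increase roughly linearly with attempts under the baseline ...
print(round(np.corrcoef(attempts, hits)[0, 1], 2))

# ... while observed hit ratios scatter around p_hit, with more scatter
# (lower reliability) for low-output than for high-output individuals.
low = attempts <= np.quantile(attempts, 0.25)
high = attempts >= np.quantile(attempts, 0.75)
print(round(float(hit_ratio[low].std()), 3), round(float(hit_ratio[high].std()), 3))
```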
Psychological tests are usually analysed with item response models. Recently, some alternative measurement models have been proposed that were derived from cognitive process models developed in experimental psychology. These models consider not only the responses but also the response times of the test takers. Two such models are the Q-diffusion model and the D-diffusion model. Both models can be calibrated with the diffIRT package of the R statistical environment via marginal maximum likelihood (MML) estimation. In this manuscript, an alternative approach to model calibration is proposed. The approach is based on weighted least squares estimation and parallels the standard estimation approach in structural equation modelling. Estimates are determined by minimizing the discrepancy between the observed and the model-implied covariance matrix. The estimator is simple to implement, consistent, and asymptotically normally distributed. Least squares estimation also provides a test of model fit by comparing the observed and implied covariance matrices. The estimator and the test of model fit are evaluated in a simulation study. Although parameter recovery is good, the estimator is less efficient than the MML estimator.
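The covariance-based least squares idea can be sketched as follows; the sketch uses a generic one-factor model as a stand-in for the diffusion-model-implied covariance structure, so the data-generating setup, the unweighted fit function, and all names are illustrative assumptions rather than the diffIRT or Q-/D-diffusion implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)

# Hypothetical data: 400 test takers, 4 observed variables generated from
# a one-factor model (a stand-in for the more complex diffusion-model-
# implied structure of responses and response times).
n, p = 400, 4
true_loadings = np.array([0.9, 0.8, 0.7, 0.6])
factor = rng.normal(size=(n, 1))
data = factor @ true_loadings[None, :] + rng.normal(scale=0.5, size=(n, p))
S = np.cov(data, rowvar=False)                 # observed covariance matrix

def implied_cov(theta):
    lam = theta[:p]                            # loadings
    psi = np.exp(theta[p:])                    # residual variances (kept positive)
    return np.outer(lam, lam) + np.diag(psi)

def uls_discrepancy(theta):
    resid = S - implied_cov(theta)
    return np.sum(resid ** 2)                  # unweighted least squares fit function

start = np.concatenate([np.full(p, 0.5), np.log(np.full(p, 0.3))])
fit = minimize(uls_discrepancy, start, method="BFGS")

print(np.round(fit.x[:p], 2))                  # recovered loadings (sign is arbitrary)
print(round(float(uls_discrepancy(fit.x)), 4)) # residual discrepancy, the basis of a fit test
```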
In this study, we focus on mental speed and divergent thinking, examining their relationship and the influence of task speededness. Participants (N = 109) completed a set of processing speed tasks and a test battery measuring divergent thinking. We used two speeded divergent-thinking tasks of 2 minutes and two unspeeded tasks of 8 minutes to test the influence of task speededness on creative quality and their relation to mental speed. Before each task, participants were instructed to be creative in order to optimally measure creative quality. We found a large main effect of task speededness: ideas were less creative when tasks were speeded than when they were unspeeded (Cohen's d = −1.64). We also replicated a positive relationship of mental speed with speeded divergent thinking (r = .21) and of mental speed with unspeeded divergent thinking (r = .25). Our hypothesis that the relation would be higher for the speeded divergent-thinking tasks was not confirmed. Importantly, variation in creative quality scores under speeded conditions was not explained by mental speed beyond the predictive power of unspeeded creative quality. The latter finding implies that measurement of creative quality under speeded conditions is not confounded by mental speed.