Recently emerging evidence suggests that the dominant structural model of mental abilities, the Cattell-Horn-Carroll (CHC) model, may not adequately account for observed scores on mental abilities batteries, leading scholars to call the model's validity into question. Establishing the robustness of these findings is important because CHC is the foundation for several contemporary mental abilities test batteries, such as the Woodcock-Johnson III (WJ-III). Using confirmatory factor analysis, we investigated CHC's robustness across 4 archival samples of mental abilities test battery data, including the WJ-III, the Kaufman Adolescent & Adult Intelligence Test (KAIT), the Kaufman Assessment Battery for Children (KABC), and the Differential Ability Scales (DAS). We computed omega hierarchical (ωH) and omega subscale (ωS) coefficients, which estimated the relationship of composite scores to g and to the broad factors, respectively. Across all 4 samples, we found strong evidence for a general ability, g. We additionally found evidence for 3 to 9 residualized, orthogonal broad abilities existing independently of g, many of which also explained reliable variance in test battery scores that cannot be accounted for by g alone. The reliabilities of these broad factors, however, were less than desirable (i.e., <.80), and achieving desirable reliabilities would be practically infeasible (e.g., requiring excessively large numbers of subtests). Our results, and those of CHC critics, are wholly consistent with Carroll's model. Essentially, both g and orthogonal broad abilities are required to explain variance in mental abilities test battery scores, which is consistent with Carroll but not Cattell-Horn.
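The ωH and ωS computation described above can be sketched from a bifactor solution's standardized loadings. The loadings below are illustrative placeholders, not values from any of the analyzed batteries, and only a single group factor is shown for simplicity (a full battery would sum one squared-loading term per broad factor):

```python
# Hedged sketch: omega hierarchical (ωH) from a bifactor model.
# g_loadings: subtest loadings on the general factor.
# s_loadings: subtest loadings on one residualized broad (group) factor.
# All loadings are hypothetical and standardized.

g_loadings = [0.70, 0.60, 0.65, 0.50, 0.55, 0.60]
s_loadings = [0.30, 0.40, 0.00, 0.35, 0.25, 0.00]

# Variance in the composite attributable to g: (sum of g loadings)^2.
var_g = sum(g_loadings) ** 2

# Variance attributable to the group factor: (sum of its loadings)^2.
var_s = sum(s_loadings) ** 2

# Residual (error) variance per standardized subtest: 1 - λg² - λs².
var_error = sum(1 - g**2 - s**2 for g, s in zip(g_loadings, s_loadings))

total_var = var_g + var_s + var_error

# ωH: proportion of composite-score variance due to g alone.
omega_h = var_g / total_var
# ω total: proportion due to all common factors (g plus group factors).
omega_total = (var_g + var_s) / total_var

print(round(omega_h, 3), round(omega_total, 3))
```

The analogous ωS for a broad factor divides that factor's (sum of loadings)² for its own subtests by the total variance of the subscale composite, which is why short subscales with modest loadings struggle to reach reliabilities of .80.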
Research has historically adopted one of two views toward learner discretion in training: learner discretion is controlled by training characteristics (i.e., the instruction-centric view), or learner discretion is a substantive learning process, the understanding of which is essential to developing more effective training (i.e., the learner-centric view). More recently, training scholars have followed the general trend in scientific psychology toward increasingly nuanced and interactionist views of individual behavior. That is, the effectiveness of providing learner discretion is a joint and interactive function of both instruction-centric and learner-centric processes. Taken together, the learner-centric, instruction-centric, and interactionist views implicate a constellation of constructs necessary to understand learner control. Here we review the diverse literatures examining learner-control-related constructs and derive specific practical recommendations for designing e-learning. These recommendations follow from the interactionist perspective of learning in modern work environments.
Collective intelligence has been described as a general factor that "explains a group's performance on a wide variety of tasks" (Woolley, Chabris, Pentland, Hashmi, & Malone, 2010, p. 686), much like the general intelligence factor explains individuals' performance on cognitive ability tasks. This construct has received widespread attention in both the media and the academic community. In this article we reexamine the data from 6 previously published samples that have been used to examine the existence of the collective intelligence construct and show that the empirical support for the construct is generally weak. Specifically, we show that the general factor explains only a small proportion of the variance in performance on many group tasks. We also highlight how 2 statistical artifacts, the apparent presence of low-effort responding and the nested nature of the data, may have further inflated the little covariation that exists between group performance on different tasks. These findings suggest that there is insufficient support for the existence of a collective intelligence construct.
Self‐efficacy is clearly important for learning. Research identifying the most important sources of self‐efficacy beliefs, however, has been somewhat limited to date in that different disciplines focus largely on different sources of self‐efficacy. Whereas education researchers focus on Bandura's original sources of enactive mastery, vicarious experience, verbal persuasion and negative emotional arousal, organizational researchers focus on learning, performance‐prove and performance‐avoid achievement goal orientations. To date, however, little research has examined the importance of a specific set of self‐efficacy sources after fully accounting for the influence of other sources. Data were obtained from 278 adult trainees who completed online Microsoft Excel training. Results suggest that, after accounting for achievement goal orientation, Bandura's sources of vicarious experience and negative emotional arousal are still important contributors to pretraining self‐efficacy beliefs. As such, organizations seeking to maximize training effectiveness may wish to consider pretraining interventions simultaneously targeting achievement goal orientation, vicarious experience and negative emotional arousal.