Advances at the intersection of artificial intelligence (AI) and education and training are occurring at an ever-increasing pace. On the education and training side, psychological and performance constructs play a central role in both theory and application. It is therefore essential to accurately determine the dimensionality of a construct, as dimensionality bears on both the development and testing of theory and its practical application. Traditionally, exploratory and confirmatory factor analyses have been employed to establish the dimensionality of data. Due in part to inconsistent findings, methodologists have recently resurrected the bifactor approach for establishing the dimensionality of data. The bifactor model is pitted against competing factor structures, and the one with the best overall fit (according to chi-square, root mean square error of approximation (RMSEA), comparative fit index (CFI), Tucker–Lewis index (TLI), and standardized root mean square residual (SRMR)) is preferred. If the bifactor structure is preferred, it can be further examined via a suite of emerging coefficients (e.g., omega, omega hierarchical, omega subscale, H, explained common variance, and percent uncontaminated correlations), each of which is computed from standardized factor loadings. To examine the utility of these new statistical tools in an education and training context, we analyze data where the construct of interest is trust. We chose trust because it is central to, among other things, understanding human reliance upon and utilization of AI systems. Applying the statistical approach described above, we determined that the two-factor structure of a widely employed trust scale is better represented by a single general factor. Findings like this hold substantial implications for theory development and testing, for prediction in structural equation modeling (SEM), and for the utilization of scales and their role in education, training, and AI systems. We encourage other researchers to employ the statistical measures described here to critically examine the construct measures used in their work if those measures are thought to be multidimensional. Only through the appropriate utilization of constructs, defined in part by their dimensionality, can we advance the intersection of AI and simulation and training.
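As an illustrative sketch (not drawn from the study itself), two of the coefficients named above, omega hierarchical and explained common variance, can be computed directly from the standardized loadings of a fitted bifactor model. Using the conventional formulas, with the notation below being ours rather than the original authors': let $\lambda_{g,i}$ denote item $i$'s loading on the general factor, $\lambda_{s,i}$ its loading on its group factor $s$, and $\theta_i$ its residual variance.

\[
\omega_h \;=\; \frac{\left(\sum_i \lambda_{g,i}\right)^{2}}{\left(\sum_i \lambda_{g,i}\right)^{2} \;+\; \sum_s \left(\sum_{i \in s} \lambda_{s,i}\right)^{2} \;+\; \sum_i \theta_i},
\qquad
\mathrm{ECV} \;=\; \frac{\sum_i \lambda_{g,i}^{2}}{\sum_i \lambda_{g,i}^{2} \;+\; \sum_s \sum_{i \in s} \lambda_{s,i}^{2}}.
\]

Under these definitions, values of $\omega_h$ and ECV approaching 1 indicate that the general factor accounts for nearly all of the reliable common variance, the pattern consistent with treating a scale as essentially unidimensional.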