We present initial structural validity evidence for a serious game designed for personnel selection and classification for cybersecurity roles in the US Air Force (USAF). Based on a literature review and input from USAF cybersecurity subject-matter experts, we targeted six constructs for assessment. We describe the development process used to build a game to assess individual differences in these constructs while also being engaging and motivating for players. We attend to the challenge of avoiding an overall game performance factor that dominates the variance of multiple constructs scored from the same gameplay episodes and report steps taken to enhance the discriminant validity of the scores. We apply factor analysis and item response theory models to develop scores that are reliable, show discriminant validity, and show modest education/gender group differences.
Progress in technology and processing power has enabled the advent of sophisticated systems, including Artificial Intelligence (AI) agents. AI agents have penetrated society in many forms, including conversational agents, or chatbots. As these chatbots have a social component, it is critical to evaluate the social aspects of their design and its impact on user outcomes. This study employs Self-Determination Theory to examine the effect of its three motivational needs on user interaction outcome variables of a decision-making chatbot. Specifically, this study looks at the influence of relatedness, competence, and autonomy on user satisfaction, engagement, decision efficiency, and decision accuracy. A carefully designed experiment revealed that all three needs are important for user satisfaction and engagement, while competence and autonomy are associated with decision accuracy. These findings highlight the importance of considering psychological constructs during AI design. Our findings also offer useful implications for AI designers and organizations that plan on using AI-assisted chatbots to improve decision-making efforts.
In the latest salvo in the century-long lexical-dimensionality-reduction debate (Galton, 1884), Ashton and Lee (2020) argue their HEXACO model is superior to Big Five models. We argue that debates comparing alternative low-dimensional personality structures no longer advance personality science or practice. Instead, researchers should embrace the inherent complexity and high dimensionality of human individual differences. If a low-dimensional model is used, investigators should choose a model based on its coherent representation of traits they deem meaningful for the research domain, rather than its alignment with a specific factor analysis solution.
Yarkoni highlights patterns of overgeneralization in psychology research. In this comment, we note that such challenges also pertain to applied psychological and organizational research and practice. We use two examples – cross-cultural generalizability and implicit bias training – to illustrate common practices of overgeneralization from narrow research samples to broader operational populations. We conclude with recommendations for research and practice.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.