Students learned how to solve binomial probability problems from either a procedurally based lesson or a conceptually based lesson and then worked in distributed pairs using a computer-based chat environment. Cognitively homogeneous dyads (i.e., both members received the same lesson) performed more accurately on standard problems, whereas cognitively diverse dyads (i.e., each member received a different lesson) performed more accurately on transfer problems. The cognitively homogeneous dyads perceived a greater sense of common ground with their partner but spent a greater proportion of their time communicating about low-level details (e.g., message verification), whereas the cognitively diverse dyads spent a greater proportion of their time on high-level discussion (e.g., solution development). The results clarify that common training leads to more positive perceptions of collaboration but improves performance only on problems highly similar to those experienced during training, whereas diverse training improves a dyad's ability to perform well in new situations.
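For readers unfamiliar with the problem type referenced above, the following is a minimal sketch of a standard binomial probability computation. The specific problems used in the lessons are not reproduced in the abstract, so this example is purely illustrative.

```python
# Illustrative only: a standard binomial probability computation of the kind
# the lessons reportedly covered (the abstract does not give the actual problems).
from math import comb

def binomial_pmf(n: int, k: int, p: float) -> float:
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Example: probability of exactly 3 successes in 5 trials with p = 0.5.
print(binomial_pmf(n=5, k=3, p=0.5))  # 0.3125
```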
Organizational cybersecurity efforts depend largely on the employees who reside within organizational walls. These individuals are central to the effectiveness of organizational actions to protect sensitive assets, and research has shown that they can be detrimental (e.g., sabotage and computer abuse) as well as beneficial (e.g., protection-motivated behaviors) to their organizations. One major context in which employees affect their organizations is email phishing, a common attack vector used by external actors to penetrate organizational networks, steal employee credentials, and cause other forms of harm. By analyzing the behavior of more than 6,000 employees at a large university in the southeastern United States during 20 mock phishing campaigns over a 19-month period, this research effort makes several contributions. First, employees' negative behaviors, such as clicking links and then entering data, are evaluated alongside the positive behavior of reporting suspected phishing attempts to the proper organizational representatives. The analysis reveals evidence of both the repeat-clicker and repeat-reporter phenomena, along with their frequency and Pareto distributions across the study time frame. Second, we find that employees can be categorized into one of four distinct clusters with respect to their behavioral responses to phishing attacks: "Gaffes," "Beacons," "Spectators," and "Gushers." While each cluster exhibits some level of phishing failures and reports, significant variation exists among the employee classifications. Our findings help drive a new and more holistic stream of research on the full range of employee responses to phishing attacks, and we provide avenues for such future research.
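The clustering method and features behind the four employee categories are not detailed in this summary. The sketch below assumes a simple k-means grouping on per-employee click and report rates across the 20 campaigns, using simulated stand-in data, to illustrate the kind of analysis described.

```python
# Hypothetical sketch: the abstract does not specify the clustering method or
# features; this assumes k-means (k=4) on per-employee click and report rates.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_employees, n_campaigns = 6000, 20

# Simulated stand-in data: per-employee counts of clicks and reports
# across the mock phishing campaigns (the real data are not public).
clicks = rng.binomial(n_campaigns, 0.08, size=n_employees)
reports = rng.binomial(n_campaigns, 0.15, size=n_employees)
features = np.column_stack([clicks, reports]) / n_campaigns  # rates in [0, 1]

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
for cluster in range(4):
    mask = labels == cluster
    print(f"cluster {cluster}: n={mask.sum()}, "
          f"mean click rate={features[mask, 0].mean():.2f}, "
          f"mean report rate={features[mask, 1].mean():.2f}")
```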
To better understand employees’ reporting behaviors in relation to phishing emails, we gamified the phishing security awareness training process by creating and conducting a month-long “Phish Derby” competition at a large university in the U.S. The university’s Information Security Office challenged employees to prove they could detect phishing emails as part of the simulated phishing program already in place. Employees volunteered to compete for prizes during this special event and were instructed to report suspicious emails as potential phishing attacks. Prior to the beginning of the competition, we collected demographics and data related to the concepts central to two theoretical foundations: the Big Five personality traits and goal orientation theory. We found several notable relationships between demographic variables and Phish Derby performance, which was operationalized as the number of phishing attacks reported and the speed of reporting. Several key findings emerged: past performance on simulated phishing campaigns positively predicted Phish Derby performance; older participants performed better than their younger colleagues, whereas more educated participants performed worse; and individuals who used a mix of PCs and Macs at work performed worse than those using a single platform. We also found that two of the Big Five personality dimensions, extraversion and agreeableness, were both associated with poorer performance in phishing detection and reporting. Likewise, individuals who were driven to perform well in the Phish Derby because they desired to learn from the experience (i.e., learning goal orientation) performed at a lower level than those driven by other goals. Interestingly, self-reported levels of computer skill and perceived ability to detect phishing messages failed to exhibit a significant relationship with Phish Derby performance. We discuss these findings and describe how focusing on motivating good employee cyber behaviors is a necessary yet too often overlooked component in organizations whose cybersecurity training cultures are rooted in employee click rates alone.
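The exact scoring formula is not given beyond the two components named above (number of reports and report speed). The sketch below shows one hypothetical way such a composite Phish Derby score could be computed; the function name and inputs are invented for illustration.

```python
# Hypothetical composite: the abstract says performance was operationalized from
# the number of phishing attacks reported and report speed, but gives no formula.
# This sketch standardizes both components and averages them (faster = higher).
from statistics import mean, stdev

def phish_derby_scores(report_counts, mean_report_minutes):
    """Return a composite score per participant from two hypothetical inputs:
    report_counts[i]       -- number of simulated phish reported by participant i
    mean_report_minutes[i] -- average minutes from email delivery to report."""
    def z(values):
        mu, sd = mean(values), stdev(values)
        return [(v - mu) / sd for v in values]
    count_z = z(report_counts)
    speed_z = z([-m for m in mean_report_minutes])  # negate so faster = higher
    return [(c + s) / 2 for c, s in zip(count_z, speed_z)]

print(phish_derby_scores([10, 7, 3], [12.0, 45.0, 30.0]))
```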
AI-generated “deep fakes” are becoming increasingly professional and can be expected to become an essential tool for cybercriminals conducting targeted and tailored social engineering attacks, as well as for others aiming to influence public opinion more generally. While the technological arms race is producing increasingly efficient forensic detection tools, these are unlikely to be in place and applied by ordinary users on an everyday basis any time soon, especially if social engineering attacks are camouflaged as unsuspicious conversations. To date, most cybercriminals do not yet have the necessary resources, competencies, or raw material featuring the target to produce perfect impersonations. To raise awareness and efficiently train individuals to recognize the most widespread deep fakes, understanding what may cause individual differences in the ability to recognize them is central. Previous research has suggested a close relationship between political attitudes and top-down perceptual and subsequent cognitive processing styles. In this study, we investigated the impact of political attitudes and agreement with the political message content on individuals’ deep fake recognition skills. A total of 163 adults (72 female, 44.2%) judged a series of video clips of politicians’ statements across the political spectrum regarding their authenticity and their agreement with the message conveyed. Half of the presented videos were fabricated via lip-sync technology. In addition to agreement with each particular statement, more global political attitudes toward social and economic topics were assessed via the Social and Economic Conservatism Scale (SECS). Data analysis revealed robust negative associations between participants’ general and, in particular, social conservatism and their ability to recognize fabricated videos. This effect was more pronounced when participants agreed with the specific message content. Deep fakes watched on mobile phones and tablets were considerably less likely to be recognized as such than those watched on stationary computers. To the best of our knowledge, this study is the first to investigate and establish the association between political attitudes and interindividual differences in deep fake recognition. The study further supports recently published research suggesting relationships between conservatism and the perceived credibility of conspiracy theories and fake news in general. Implications for further research on the psychological mechanisms underlying this effect are discussed.
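The statistical model behind the reported associations is not specified in this summary. The sketch below illustrates only the general kind of test involved, correlating hypothetical per-participant SECS scores with simulated deep fake recognition accuracy.

```python
# Minimal sketch of the kind of association reported: the abstract does not
# specify the statistical model, so this simply correlates hypothetical
# per-participant SECS scores with simulated recognition accuracy.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 163  # sample size reported in the abstract

secs = rng.uniform(0, 100, n)                         # hypothetical SECS scores
noise = rng.normal(0, 0.12, n)
accuracy = np.clip(0.8 - 0.003 * secs + noise, 0, 1)  # simulated negative trend

r, p = pearsonr(secs, accuracy)
print(f"r = {r:.2f}, p = {p:.3g}")  # negative r expected on this simulated data
```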
Humans quickly and effortlessly impose narrative context onto ambiguous stimuli, as demonstrated through psychological projective testing and ambiguous figures. We suggest that this feature of human cognition may be weaponized as part of an information operation. Such Ambiguous Self-Induced Disinformation (ASID) attacks would employ the following elements: the introduction of a culturally consistent narrative, the presence of ambiguous stimuli, the motivation for hypervigilance, and a social network. ASID attacks represent a reduced-risk, low-investment approach on the part of the adversary with a potentially significant reward, making this a likely tactic of choice for information operators within the context of gray-zone conflicts.