Cyber-attacks are deliberate attempts by adversaries to illegally access the online information of other individuals or organizations. Cyber-attacks are likely to have severe monetary consequences for organizations and their workers. However, little is currently known about how the monetary consequences of cyber-attacks influence the decision-making of defenders and adversaries. In this research, using a cybersecurity game, we evaluate the influence of monetary penalties on the decisions of people performing in the roles of human defenders and adversaries via experimentation and computational modeling. In a laboratory experiment, participants were randomly assigned to the role of "hackers" (adversaries) or "analysts" (defenders) across three between-subjects conditions: equal payoffs (EQP), penalizing defenders for false alarms (PDF), and penalizing defenders for misses (PDM). The penalties in the PDF and PDM conditions were ten times costlier for defender participants than in the EQP condition, which served as a baseline. Results revealed an increase in attack actions and a decrease in defend actions in the PDF condition, and the opposite pattern in the PDM condition. In addition, both attack and defend decisions deviated from their Nash equilibria. To understand the reasons for these results, we calibrated a model based on Instance-Based Learning Theory (IBLT) to the attack and defend decisions collected in the experiment. The model's parameters revealed an excessive reliance on recency, frequency, and variability mechanisms by both defenders and adversaries. We discuss the implications of our results for cyber-attack situations in which defenders are penalized for their misses and false alarms.
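For concreteness, the IBLT mechanisms named above (recency, frequency, and blending over stored instances) can be sketched in a few lines of Python. This is a minimal illustration assuming the standard ACT-R activation equation and Boltzmann blending; the action labels, payoffs, and parameter values are illustrative assumptions, not those of the calibrated model.

```python
import math
import random

# Minimal sketch of an IBLT agent (assumed standard ACT-R activation and
# Boltzmann blending; parameters and payoffs are illustrative only).

DECAY = 0.5   # d: higher values weight recent experiences more heavily
NOISE = 0.25  # s: activation noise

class IBLAgent:
    def __init__(self, actions, default_utility=10.0):
        # memory[action] = list of (timestamp, observed payoff) instances;
        # a prepopulated optimistic instance drives early exploration.
        self.memory = {a: [(0, default_utility)] for a in actions}
        self.t = 0

    def _blended_value(self, action):
        inst = self.memory[action]
        # Activation: log of a power-law-decayed trace (recency; frequency
        # enters via the number of instances), plus Gaussian noise.
        acts = [math.log((self.t - tj) ** -DECAY) + random.gauss(0, NOISE)
                for tj, _ in inst]
        tau = NOISE * math.sqrt(2)
        z = sum(math.exp(a / tau) for a in acts)
        # Blending: payoffs weighted by each instance's retrieval probability.
        return sum(p * math.exp(a / tau) / z
                   for (_, p), a in zip(inst, acts))

    def choose(self):
        self.t += 1
        return max(self.memory, key=self._blended_value)

    def observe(self, action, payoff):
        self.memory[action].append((self.t, payoff))

# Usage: a "hacker" choosing whether to attack on each trial.
agent = IBLAgent(["attack", "withdraw"])
for _ in range(100):
    action = agent.choose()
    payoff = random.choice([5, -10]) if action == "attack" else 0
    agent.observe(action, payoff)
```

Because activation decays as a power law, a run of recent losses can suppress an action even when its long-run average payoff is favorable, which is the kind of recency-driven deviation from Nash play reported above.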
Recent research in cybersecurity has begun to develop active defense strategies using game-theoretic optimization of the allocation of limited defenses, combined with deceptive signaling. These algorithms assume rational human behavior. However, in an online game designed to simulate an insider-attack scenario, humans playing the role of attackers attacked far more often than predicted under perfect rationality. We describe an instance-based learning cognitive model, built in ACT-R, that accurately predicts human performance and biases in the game. To improve defenses, we propose an adaptive method of signaling that uses the cognitive model to trace an individual's experience in real time. We discuss the results and implications of this adaptive signaling method for personalized defense.
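The flavor of the game-theoretic signaling such algorithms compute can be shown with a short indifference calculation (a sketch under hypothetical payoffs, not the published algorithm): the defender always warns on covered targets and bluffs on uncovered ones with just enough probability that a perfectly rational attacker gains nothing by attacking after a warning.

```python
# Hypothetical numbers: n targets, m defenses, and attacker payoffs
# gain (> 0, uncovered target) and loss (< 0, covered target).
def bluff_probability(n_targets, n_defenses, gain, loss):
    """Probability x of a deceptive warning on an uncovered target such
    that EV(attack | warning) = 0 for a rational attacker.

    With warnings always shown on covered targets,
    P(covered | warning) = m / (m + x*(n - m)), so setting
    m*loss + x*(n - m)*gain = 0 and solving for x gives the bluff rate.
    """
    x = -n_defenses * loss / ((n_targets - n_defenses) * gain)
    return min(1.0, max(0.0, x))

print(bluff_probability(6, 2, gain=10, loss=-10))  # -> 0.5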
This work is an initial step toward developing a cognitive theory of cyber deception. Although deception has been widely studied, the psychology of deception has largely focused on physical cues. Given that present-day communication among humans is largely electronic, we focus on the cyber domain, where physical cues are unavailable and for which there is less psychological research. To improve cyber defense, researchers have used signaling theory to extend algorithms developed for the optimal allocation of limited defense resources, using deceptive signals to trick the human mind. However, these algorithms are designed to protect against adversaries who make perfectly rational decisions. In behavioral experiments using an abstract cybersecurity game (the Insider Attack Game), we examined human decision-making when paired against the defense algorithm. We developed an instance-based learning (IBL) model of an attacker, using the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture, to investigate how humans make decisions under deception in cyber-attack scenarios. Our results show that the defense algorithm is more effective at reducing the probability of attack and protecting assets when it uses deceptive signaling than when it uses no signaling, but it is less effective than predicted against a perfectly rational adversary. The IBL model also replicates human attack decisions accurately. The model shows how human decisions arise from experience, and how memory retrieval dynamics can give rise to cognitive biases such as confirmation bias. We discuss the implications of these findings for informing theories of deception and for designing more effective signaling schemes that account for human bounded rationality.
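To make the experiential account concrete, the sketch below pairs a simplified IBL attacker (instances keyed by signal and action) with the bluffing defender from the previous example. All numbers are assumptions; the qualitative point is that warnings followed by successful attacks keep the blended value of attacking high, so the model attacks after warnings well above the rational baseline.

```python
import math
import random

random.seed(1)
D, S = 0.5, 0.25  # decay and noise, as in the IBLT sketch above

memory, t = {}, 0  # memory[(signal, action)] = list of (time, payoff)

def blended(signal, action):
    inst = memory.get((signal, action))
    if not inst:
        return 10.0  # optimistic default drives early exploration
    acts = [math.log((t - tj) ** -D) + random.gauss(0, S) for tj, _ in inst]
    tau = S * math.sqrt(2)
    z = sum(math.exp(a / tau) for a in acts)
    return sum(p * math.exp(a / tau) / z for (_, p), a in zip(inst, acts))

n, m, gain, loss, bluff = 6, 2, 10, -10, 0.5  # bluff rate from above
warnings = attacks_after_warning = 0
for _ in range(500):
    t += 1
    covered = random.random() < m / n
    warning = covered or random.random() < bluff
    action = ("attack" if blended(warning, "attack") > blended(warning, "pass")
              else "pass")
    payoff = (loss if covered else gain) if action == "attack" else 0
    memory.setdefault((warning, action), []).append((t, payoff))
    if warning:
        warnings += 1
        attacks_after_warning += action == "attack"

# A rational attacker gains nothing by attacking after a warning (EV = 0);
# the experiential model typically keeps attacking well above that baseline.
print(attacks_after_warning / warnings)
```

Because the model learns only from the outcomes of its own choices, successful attacks generate further attack instances, a retrieval dynamic of the kind that can produce confirmation bias.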