Cyber defense requires decision making under uncertainty, yet this critical area has not been a focus of research in judgment and decision making. Future defense systems, which will rely on software-defined networks and may employ “moving target” defenses, will increasingly automate lower-level detection and analysis, but will still require humans in the loop for higher-level judgment. We studied the decision-making processes and outcomes of 17 experienced network defense professionals who worked through a set of realistic network defense scenarios. We manipulated gain versus loss framing in a cyber defense scenario and found significant effects in one of two focal problems. Defenders who began with a network already in quarantine (gain framing) used a quarantine system more, as measured by cost, than those who did not (loss framing). We also found some differences in perceived workload and efficacy. Alternative explanations of these findings and implications for network defense are discussed.
Open plan offices are both popular and controversial. We studied the responses of a group moving from shared but closed offices to an open plan office. The main data source reported here is a workplace satisfaction survey administered pre-move, post-move, and to a lab baseline comparison group at the same organization, supplemented by data from observations and interviews. Workers moving to the open plan office appreciated the flexible support for collaboration and the space’s appearance. Satisfaction was lower regarding space for private, concentrated work, temperature control, and the ability to hold private conversations. There were also some statistical interactions suggesting more positive responses from males and less positive responses from introverts; analysis was limited by the small sample size. Observations and interviews gave further insight into open plan “neighborhoods” and the design of ad hoc spaces.
As we enter an age in which the behavior and capabilities of artificial intelligence and autonomous system technologies become ever more sophisticated, cooperation, collaboration, and teaming between people and these machines are rising to the forefront of critical research areas. People engage socially with almost everything with which they interact. However, unlike animals, machines do not share the experiential aspects of sociality. Experiential robotics identifies the need to develop machines that not only learn from their own experience but also learn from the experience of people in interaction, experiences that are primarily social. In this paper, we therefore argue for the need to place experiential considerations in interaction, cooperation, and teaming at the basis of the design and engineering of person-machine teams. We first explore the importance of semantics in driving engineering approaches to robot development. We then examine differences in how relevant terms such as trust and ethics are used in engineering and social science approaches, laying out the implications for the development of autonomous, experiential systems.