Controlled natural language (CNL) has great potential to support human-machine interaction (HMI) because it provides an information representation that is both human readable and machine processable. We investigated the effectiveness of a CNL-based conversational interface for HMI in a behavioral experiment called the simple human experiment regarding locally observed collective knowledge (SHERLOCK). In SHERLOCK, individuals acted in groups to discover and report information to the machine using natural language (NL), which the machine then processed into CNL. The machine fused responses from different users to form a common operating picture: a dashboard showing the level of agreement for distinct pieces of information. To obtain information to add to this dashboard, users explored the real world in a simulated crowdsourced sensing scenario. This scenario represented a simplified, controlled analog for tactical intelligence (i.e., direct intelligence of the environment), which is key for rapidly planning military, law enforcement, and emergency operations. Overall, despite close to zero training, 74% of users input NL that was machine interpretable and addressed the assigned tasks. An experimental manipulation intended to increase user-machine interaction, however, did not improve performance as hypothesized. Nevertheless, the results indicate that the conversational interface may be effective in assisting humans with the collection and fusion of information in a crowdsourcing context.

Index Terms: Controlled natural language (CNL), conversational interface, human-computer collaboration (HCC), human-machine interaction (HMI), tactical intelligence.
Much work has been dedicated to exploring multi-agent reinforcement learning (MARL) paradigms that implement a centralized learning with decentralized execution (CLDE) approach to achieve human-like collaboration in cooperative tasks. Here, we discuss variations of centralized training and describe a recent survey of algorithmic approaches. The goal is to explore how different implementations of information-sharing mechanisms in centralized learning may give rise to distinct group coordinated behaviors in multi-agent systems performing cooperative tasks.
Cyber attacks endanger physical, economic, social, and political security. There have been extensive efforts in government, academia, and industry to anticipate, forecast, and mitigate such cyber attacks. A common approach is time-series forecasting of cyber attacks based on data from network telescopes, honeypots, and automated intrusion detection/prevention systems. This research has uncovered key insights such as systematicity in cyber attacks. Here, we propose an alternate perspective on this problem by forecasting attacks that are analyst-detected and -verified occurrences of malware; we refer to these verified malware instances as cyber event data. Specifically, our dataset comprises analyst-detected incidents from a large operational Computer Security Service Provider (CSSP) for the U.S. Department of Defense, which rarely relies only on automated systems. The dataset consists of weekly counts of cyber events over approximately seven years. This curated dataset has characteristics that distinguish it from most datasets used in prior research on cyber attacks. Because all cyber events were validated by analysts, our dataset is unlikely to contain the false positives that are endemic in other data sources. Further, the higher-quality data could be used for a number of important CSSP tasks, such as resource allocation, estimation of security resources, and the development of effective risk-management strategies. To quantify bursts, we used a Markov model of state transitions. For forecasting, we used a Bayesian state-space model and found that events one week ahead could be predicted with reasonable accuracy, with the exception of bursts. Our findings of systematicity in analyst-detected cyber attacks are consistent with previous work using cyber attack data from other sources.
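The one-week-ahead forecasting described above can be illustrated with a minimal sketch: a local level (random walk plus noise) state-space model, filtered with standard Kalman recursions to yield a forecast mean and uncertainty interval. The specific model form, variance values, and weekly counts below are illustrative assumptions, not the authors' exact specification or data.

```python
import numpy as np

def local_level_forecast(y, obs_var, state_var):
    """Kalman filter for a local level model:
        y_t  = mu_t + eps_t,      eps_t ~ N(0, obs_var)
        mu_t = mu_{t-1} + eta_t,  eta_t ~ N(0, state_var)
    Returns the one-step-ahead forecast mean and variance for y."""
    a, P = y[0], obs_var + state_var     # initialize at the first observation
    for obs in y[1:]:
        F = P + obs_var                  # forecast variance of y_t
        K = P / F                        # Kalman gain
        a = a + K * (obs - a)            # updated state mean
        P = P * (1.0 - K) + state_var    # next predicted state variance
    return a, P + obs_var                # forecast distribution of y_{t+1}

# Hypothetical weekly cyber event counts (simulated, not real CSSP data)
rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(0.0, 1.0, 52)) + 30.0
y = level + rng.normal(0.0, 3.0, 52)

mean, var = local_level_forecast(y, obs_var=9.0, state_var=1.0)
lo, hi = mean - 1.96 * np.sqrt(var), mean + 1.96 * np.sqrt(var)
print(f"one-week-ahead forecast: {mean:.1f} (95% interval {lo:.1f} to {hi:.1f})")
```

The interval conveys both a probable value and a range, as a weather forecast does; bursts, being abrupt departures from the smoothly evolving level, would fall outside such intervals, consistent with the reported difficulty in predicting them.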
The advance information provided by a forecast may help with threat awareness by providing a probable value and range for future cyber events one week ahead, similar to a weather forecast. Other potential applications for cyber event forecasting include proactive allocation of resources and capabilities for cyber defense (e.g., analyst staffing and sensor configuration) in CSSPs. Enhanced threat awareness may improve cybersecurity by helping to optimize human and technical capabilities for cyber defense.

In contrast, nearly all prior research on modeling cyber attacks [4-10] lacked analyst detection and verification of computer security incidents, with the exceptions of [11, 12]. In these two exceptions, security incidents were verified by system administrators at a large university [11] or by analysts at a CSSP [12]. Thus, in most earlier research, the sources of cyber attack data were processed data from network telescopes and honeypots [4, 6, 8-10] and alerts from automated systems on real networks [5, 7, 13]. Compared to real networks, the majority of traffic to network telescopes (passive monitoring of unrequested network traffic to unused IP addresses) and honeypots (monitored and isolated systems that are designed to appear...
Situation awareness (SA) is a widely used cognitive construct in human factors, summarized as “knowing what is going on.” Generally, SA is theoretically posited to be a critical causal factor and/or construct for performance. However, some researchers have raised concerns that SA may be circular and also that SA may lack the appropriate psychological mechanisms relevant to performance. We address these conflicting perspectives using meta-analysis to evaluate the specific and general patterns of associations among SA-performance effect sizes. Specifically, we focus on the validity of SA for performance—the degree to which SA represents or captures the relevant psychological processes and mechanisms related to task performance. From the empirical literature, we coded associations of eight unique measures of SA with (task) performance: 492 effects from 38 papers met the systematic review inclusion criteria. In contrast to SA’s broadly theorized fundamental link with performance, the magnitude of most meta-analytic mean effect sizes for SA measures was limited to medium or lower effects. Although there was a significant overall mean effect, its magnitude was also limited (r = 0.24). In addition, there was high unexplained systematic variation, with an enormous plausible range for individual effects (r = -0.20 to 0.60). The meta-analytic results are inconsistent with theories postulating that SA is fundamental to performance. Instead, SA’s validity for performance tends to be, on average, weak, with large variations among effects. Therefore, theories may need to be revised. Furthermore, even presuming SA is causally linked to performance as generally theorized, improvements in SA (such as SA-based design and training) may not correspond to meaningful increases in task performance.
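A meta-analytic mean correlation of the kind reported above is commonly computed by Fisher z-transforming each effect, pooling with inverse-variance weights under a random-effects model, and back-transforming. A minimal sketch using the DerSimonian-Laird estimator follows; the effect sizes and sample sizes are made up for illustration and are not the coded effects from the review.

```python
import math

def random_effects_mean_r(rs, ns):
    """Pool correlations via Fisher z with a DerSimonian-Laird
    random-effects model; returns the back-transformed mean r."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]     # Fisher z
    vs = [1.0 / (n - 3) for n in ns]                          # sampling variances
    w = [1.0 / v for v in vs]                                 # fixed-effect weights
    z_fe = sum(wi * zi for wi, zi in zip(w, zs)) / sum(w)
    Q = sum(wi * (zi - z_fe) ** 2 for wi, zi in zip(w, zs))   # heterogeneity statistic
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - (len(rs) - 1)) / c)                  # between-study variance
    w_re = [1.0 / (v + tau2) for v in vs]                     # random-effects weights
    z_re = sum(wi * zi for wi, zi in zip(w_re, zs)) / sum(w_re)
    return math.tanh(z_re)                                    # back-transform to r

# Hypothetical SA-performance correlations and sample sizes
rs = [0.10, 0.35, 0.22, 0.05, 0.48]
ns = [40, 120, 60, 80, 35]
print(f"random-effects mean r = {random_effects_mean_r(rs, ns):.2f}")
```

The between-study variance tau2 is what drives the wide plausible range for individual effects noted in the abstract: even with a modest pooled mean, large tau2 implies individual true effects can plausibly fall far on either side of it.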