Human-AI teaming refers to systems in which humans and artificial intelligence (AI) agents collaborate to achieve significantly better mission performance than either humans or AI can achieve alone. The goal is faster and more accurate decision-making, integrating the rapid data ingestion, learning, and analysis capabilities of AI with the creative problem-solving and abstraction capabilities of humans. The purpose of this panel is to discuss research directions in Trust Engineering for building appropriate bi-directional trust between humans and AI. Discussions focus on the challenges posed by systems that are increasingly complex and operate in imperfect-information environments. Panelists offer their perspectives on addressing these challenges through concepts such as dynamic relationship management, adaptive systems, co-discovery learning, and algorithmic transparency. Mission scenarios in command and control (C2), piloting, cybersecurity, and criminal intelligence analysis demonstrate the importance of bi-directional trust in human-AI teams.
The adoption of artificial intelligence (AI) systems in environments that involve high-risk, high-consequence decision-making is severely hampered by critical design issues. These issues include system transparency and brittleness: transparency relates to (i) the explainability of results and (ii) the ability of a user to inspect and verify system goals and constraints, while brittleness relates to (iii) the limited ability of a system to adapt to new user demands. Transparency is a particular concern for criminal intelligence analysis, where significant ethical and trust issues arise when algorithmic and system processes are not adequately understood by a user; this prevents the adoption of potentially useful technologies in policing environments. In this article, we present a novel approach to designing a conversational agent (CA) AI system for intelligence analysis that tackles these issues. We discuss the results and implications of three studies: a Cognitive Task Analysis to understand how analysts think when retrieving information during an investigation, an Emergent Themes Analysis to understand the explanation needs of different system components, and an interactive experiment with a prototype conversational agent. Our prototype conversational agent, named Pan, demonstrates transparency provision and mitigates brittleness by evolving new CA intentions. We encode interactions with the CA using human factors principles for situation recognition and use interactive visual analytics to support analyst reasoning. Our approach enables complex AI systems, such as Pan, to be used in sensitive environments, and our research has broader application than the use case discussed.
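As a rough illustration of the kind of intention recognition and transparency provision described above, the sketch below matches an analyst utterance to a known CA intention and reports why that intention was chosen. It is a minimal sketch only: the class names, intentions, and keyword-matching scheme are assumptions for illustration, not taken from Pan.

```python
from dataclasses import dataclass, field
from typing import List, Set, Tuple, Optional

@dataclass
class Intention:
    """A hypothetical CA intention: trigger keywords plus a response template."""
    name: str
    keywords: Set[str]
    response_template: str

@dataclass
class ConversationalAgent:
    """Toy agent that matches an utterance to its closest known intention
    and explains why that intention was chosen (a simple form of transparency)."""
    intentions: List[Intention] = field(default_factory=list)

    def recognise(self, utterance: str) -> Tuple[Optional[Intention], str]:
        tokens = set(utterance.lower().replace("?", "").split())
        # Score each intention by keyword overlap and keep the best match.
        score, best = max(
            ((len(tokens & i.keywords), i) for i in self.intentions),
            key=lambda pair: pair[0],
        )
        if score == 0:
            # A brittleness mitigation would act here, e.g. proposing a new intention.
            return None, "No known intention matched; a new intention could be proposed."
        matched = sorted(tokens & best.keywords)
        return best, f"Matched intention '{best.name}' via keywords {matched}."

agent = ConversationalAgent(intentions=[
    Intention("find_associates", {"associates", "contacts", "linked"},
              "Searching records for known associates..."),
    Intention("timeline_events", {"timeline", "when", "sequence"},
              "Building an event timeline..."),
])

intent, why = agent.recognise("Who is linked to the suspect's known associates?")
print(intent.name, "->", why)
```

Exposing the matched keywords alongside the chosen intention is one simple way to make the agent's selection inspectable by the analyst, in the spirit of the transparency requirements discussed above.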
For conversational agents to benefit intelligence analysis, they need to be able to recognise and respond to the analyst's intentions. Furthermore, they must provide transparency into their algorithms and be able to adapt to new situations and lines of inquiry. We present a preliminary analysis as a first step towards developing conversational agents for intelligence analysis: understanding and modelling analyst intentions so that they can be recognised by conversational agents. We describe in-depth interviews conducted with experienced intelligence analysts and the implications for designing conversational agent intentions.
Criminal investigations are driven by repetitive and time-consuming information retrieval tasks, often with high risk and high consequence. If artificial intelligence (AI) systems could automate lines of inquiry, they could reduce the burden on analysts and allow them to focus their efforts on analysis. However, there is a critical need for algorithmic transparency to address ethical concerns. In this paper, we use data gathered from Cognitive Task Analysis (CTA) interviews with criminal intelligence analysts and apply a novel analysis method to elicit question networks. We show how these networks form an event tree, in which events are consolidated by capturing analyst intentions. The event tree is simplified with a Dynamic Chain Event Graph (DCEG) that provides a foundation for transparent autonomous investigations.
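To make the idea of a question network consolidated into an event tree concrete, the following sketch represents analyst questions as nodes with outcome-labelled edges to follow-up questions, and prints the tree so the line of inquiry can be inspected. The node structure, questions, and outcomes are purely illustrative assumptions, not data from the CTA interviews, and the sketch stops short of the DCEG simplification itself.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class EventNode:
    """A node in a toy event tree: an analyst question (intention) with
    outcome-labelled edges leading to follow-up questions."""
    question: str
    children: Dict[str, "EventNode"] = field(default_factory=dict)

    def add(self, outcome: str, child: "EventNode") -> "EventNode":
        self.children[outcome] = child
        return child

def walk(node: EventNode, depth: int = 0) -> None:
    """Print the tree so the line of inquiry is inspectable (transparency)."""
    print("  " * depth + node.question)
    for outcome, child in node.children.items():
        print("  " * depth + f"  [{outcome}] ->")
        walk(child, depth + 2)

# Illustrative line of inquiry: repeated questions are consolidated into one tree.
root = EventNode("Is the suspect linked to the vehicle?")
yes = root.add("yes", EventNode("Where was the vehicle on the night in question?"))
root.add("no", EventNode("Who else had access to the vehicle?"))
yes.add("near scene", EventNode("Do ANPR records place the suspect nearby?"))

walk(root)
```

In broad terms, a chain event graph simplification would merge subtrees that share the same future structure, which is what allows a large question network to be represented compactly.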
In domains that require high-risk, high-consequence decision making, such as defence and security, there is a clear requirement for artificial intelligence (AI) systems to be able to explain their reasoning. In this paper we examine what it means to provide explainable AI. Drawing on our research findings, we propose that explanations should be tailored to the role of the human interacting with the system and to the individual system components, reflecting their different needs. We demonstrate that a ‘one-size-fits-all’ explanation is insufficient to capture this complexity of needs. Thus, designing explainable AI systems involves careful consideration of context and, within that, of the nature of both the human and AI components.
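As a small illustration of the ‘no one-size-fits-all’ point, the sketch below selects an explanation by the role of the person asking and the system component being explained. The roles, components, and explanation texts are hypothetical placeholders, not findings from the study.

```python
# Assumed mapping from (role, component) to a tailored explanation;
# all entries are illustrative placeholders.
EXPLANATIONS = {
    ("analyst", "retrieval"): "Records matched because they contained your query terms in the name and location fields.",
    ("analyst", "ranking"): "Results are ordered by recency, then by number of linked entities.",
    ("reviewer", "retrieval"): "Audit view: query terms, data sources, and access permissions used for this search.",
    ("developer", "ranking"): "ranking_score = 0.7*recency + 0.3*link_count (weights configurable).",
}

def explain(role: str, component: str) -> str:
    """Return an explanation tailored to who is asking and which component they ask about."""
    return EXPLANATIONS.get(
        (role, component),
        "No tailored explanation is available for this role/component pair.",
    )

print(explain("analyst", "ranking"))
print(explain("reviewer", "retrieval"))
```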