There are many applications where artificial intelligence (AI) can add benefit, but this benefit may not be fully realized if the human cannot understand and interact with the output as required by their context. Allowing AI to explain its decisions can potentially mitigate this issue. To develop effective explainable AI (XAI) methods that support this need, we must understand both what the human needs for decision-making and what information the AI has and can make available. This paper presents an example case of capturing those requirements. We explore how an operational planner (a senior human analyst) for a cyber protection team could use a junior-analyst virtual agent to scour, analyze, and present the available data on vulnerabilities and incidents on both the target systems and similar systems. We explore the interactions required to understand these outputs and to integrate additional knowledge held by the human. This is an exemplar case for integrating XAI into a real-world, bi-directional workflow: the senior analyst needs to understand the junior analyst's results, particularly the assumptions and implications, in order to create a plan and brief it up the command chain. He or she may have further questions or analysis needs in order to achieve this understanding. The application is the junior analyst agent and the senior human analyst working together to build this understanding of threats, vulnerabilities, incidents, likely future attacks, and counteractions on the mission-relevant cyber terrain to which their unit has been assigned.