Advancements in Artificial Intelligence (AI) will produce “reasonable disagreements” between human operators and machine partners. A simulation study investigated factors that may influence compromise between human and robot partners when they disagree in situation evaluation. Eighty-seven participants viewed urban scenes and interacted with a robot partner to make a threat assessment. We explored the impacts of multiple factors on threat ratings and trust, including how the robot communicated with the person and whether the robot compromised following dialogue. Results showed that participants were open to compromise with the robot, especially when the robot detected threat in a seemingly safe scene. Unexpectedly, dialogue with the robot and hearing robot inner speech reduced compromise and trust, relative to control conditions providing transparency or signaling benevolence. Dialogue may change the human’s perception of the robot’s role in the team, posing a challenge for the design of future systems.