Transparency is a design principle intended to make the inner workings of autonomous agents visible to end-users so that humans can evaluate the reasoning behind the agents' decisions and actions. To test the effect of agent transparency on situation awareness (SA), mental workload, and task performance, an experiment was performed in which 34 nautical navigators were tasked with interpreting the information provided by an autonomous collision and grounding avoidance system. Sixteen traffic situations were created with two levels of complexity. Four levels of transparency varied the amount and type of information presented about the system's decisions, planned actions, reasoning, and input parameters. The results show that increased transparency improves SA without increasing mental workload. However, the time to comprehend the system's decisions and planned actions increased when its reasoning was depicted. Traffic complexity impaired SA and increased mental workload and time-to-comprehension regardless of transparency level. For level 2 SA, however, transparency negated the influence of complexity, resulting in improved comprehension of the agent's reasoning despite high traffic complexity. These outcomes demonstrate the merits of agent transparency as a design principle for supporting human supervision of autonomous agents. However, developers should take care when extending these principles to time-critical applications.