Situational risk has been postulated to be one of the most important contextual factors affecting operators' trust in automation. Experimentally, however, it has received little attention and has been directly manipulated even less often. To close this gap, this study used a virtual reality multi-task environment in which the main task entailed making a diagnosis by assessing different parameters. Risk was manipulated via the altitude at which the task was set, including the possibility of virtually falling in case of a mistake. Participants were aided either by information automation or by decision automation. Results revealed that trust attitude toward the automation was not affected by risk. While trust attitude was initially lower for the decision automation, it was equally high in both groups at the end of the experiment after participants had experienced reliable support. Trust behavior, in the form of reduced automation verification, was significantly higher and increased over the course of the experiment in the group supported by decision automation. However, this detrimental effect was distinctly attenuated under high risk. This implies that studies not incorporating risk might have overestimated the negative consequences of decision automation in the real world.