In Stackelberg security games, a leader allocates security resources to protect a set of targets from strategic adversaries who aim to attack these targets after observing the leader's strategy. In this setting, the leader's decision problem is to optimize an uncertain reward that takes values in a discrete set, with a probability distribution that depends on the decision variable. We show how diverse risk-aversion models of the leader's decision problem can be formulated as tractable optimization problems, such as imposing a bound on the expected disutility, chance constraints, a bound on a distortion risk measure, or first- and second-order stochastic dominance constraints, or optimizing the value-at-risk or the conditional value-at-risk. We detail the resulting optimization problems and present computational results that show how the solution changes in two specific settings: 1) entropic risk measure or value-at-risk minimization with a quantal response follower, and 2) a prospect theory model with an optimally responding follower.
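For concreteness, the conditional value-at-risk case admits the standard Rockafellar-Uryasev reformulation; the sketch below uses assumed notation not fixed by this abstract ($x$ for the leader's strategy in a feasible set $X$, reward values $r_1, \dots, r_m$ attained with probabilities $p_1(x), \dots, p_m(x)$, and confidence level $\alpha$):

% illustrative sketch; symbols x, X, r_i, p_i(x), and \alpha are assumed notation
\begin{equation*}
\operatorname{CVaR}_{\alpha}\bigl(-R(x)\bigr)
= \min_{\tau \in \mathbb{R}} \;
\tau + \frac{1}{1-\alpha} \sum_{i=1}^{m} p_i(x)\, \max\{0,\, -r_i - \tau\},
\end{equation*}
so minimizing the conditional value-at-risk of the leader's loss amounts to the joint problem
\begin{equation*}
\min_{x \in X,\ \tau \in \mathbb{R}} \;
\tau + \frac{1}{1-\alpha} \sum_{i=1}^{m} p_i(x)\, \max\{0,\, -r_i - \tau\}.
\end{equation*}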