We introduce a Lyapunov approach to optimal control problems for average risk-sensitive Markov control processes with general risk maps. Motivated in particular by applications in behavioral economics, we consider possibly nonconvex risk maps, modeling behavior with mixed risk preferences. We extend classical objective functions to the risk-sensitive setting and are in particular interested in optimizing the average risk over an infinite time horizon for Markov control processes on general, possibly non-compact, state spaces, also allowing unbounded costs. Existence and uniqueness of an optimal control are obtained via a fixed point theorem applied to the nonlinear map modeling the risk-sensitive expected total cost. The required contraction is established in a suitably chosen seminorm under a new set of conditions: 1) Lyapunov-type conditions on both risk maps and cost functions that control the growth of iterates, and 2) Doeblin-type conditions, known for Markov chains, generalized to nonlinear mappings. In the particular case of the entropic risk map, the above conditions can be replaced by the existence of a Lyapunov function, a local Doeblin-type condition for the underlying Markov chain, and a growth condition on the cost functions.
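For reference, the entropic risk map mentioned above is standardly defined, for a risk-sensitivity parameter $\gamma > 0$ (the sign and scaling conventions may differ from those used in the body of the paper), by
\[
\rho_\gamma(X) \;=\; \frac{1}{\gamma}\,\log \mathbb{E}\!\left[e^{\gamma X}\right],
\]
with the conditional version $\rho_\gamma(X \mid \mathcal{F}) = \frac{1}{\gamma}\log \mathbb{E}\!\left[e^{\gamma X}\,\middle|\,\mathcal{F}\right]$ serving as the one-step risk map along the trajectory of the controlled Markov chain.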