Energy management schemes in multi-server data centers with setup times mostly rely on thresholds on the number of idle servers or waiting jobs to switch servers on or off. In general, an optimal energy management policy can be characterized as the solution of a Markov decision process (MDP), provided that the system parameters evolve in a Markovian fashion. The corresponding reward can be defined as a weighted sum of the mean power usage and the mean delay of requested jobs. For large-scale data centers, however, these models become intractable due to the enormous state-action space, rendering conventional algorithms inefficient at finding the optimal policy. In this paper, we propose an approximate semi-MDP (SMDP) approach, termed 'multi-level SMDP', based on state aggregation and Markovian analysis of the system behavior. Rather than averaging the transition probabilities of aggregated states, as is done in typical aggregation methods, we introduce an approximate Markovian framework that computes the transition probabilities of the proposed multi-level SMDP accurately. Moreover, by tuning the number of levels in the multi-level approach, near-optimal performance can be attained at the expense of increased state-space dimensionality. Simulation results show that the proposed approach reduces the SMDP size while yielding better rewards than existing fixed threshold-based policies and aggregation methods.
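As an illustrative sketch of the objective mentioned above (the weight $w$ and the symbols $\bar{P}$ for mean power usage and $\bar{D}$ for mean job delay are assumed notation, not taken from the paper), the weighted criterion can be written as

\[
% Sketch of a weighted power-delay objective; w, \bar{P}, \bar{D}, and \pi are illustrative notation
C(\pi) = w\,\bar{P}(\pi) + (1 - w)\,\bar{D}(\pi), \qquad w \in [0, 1],
\]

where $\pi$ denotes the server switching policy. Minimizing $C(\pi)$ (equivalently, maximizing the reward $-C(\pi)$) trades off energy consumption against job delay through the choice of $w$.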