Markov chains and Markov decision processes are widely used to model the behavior of computer systems with probabilistic aspects, and numerical iterative methods are the standard means of analyzing such models. Considerable effort has been devoted in recent decades to improving the efficiency of these methods. In this paper, focusing on Markov models with a non-sparse structure, we propose a new set of heuristics that prioritize model states with the aim of reducing the total computation time. In these heuristics, a set of simulation runs is used to statistically estimate the effect of each state on the values required by the other states; this criterion determines the priority with which each state's values are updated. The resulting state ordering improves value propagation among the states during the iterative computation. We also extend the proposed methods to very large models for which disk-based techniques are required. Experimental results show that the proposed methods reduce the running times of the iterative methods in most cases of non-sparse models.
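To illustrate the general idea of state prioritization, the following is a minimal sketch of a Gauss-Seidel-style value iteration on a Markov chain in which the state update order is an explicit parameter. The function name, the matrix `P`, the reward vector `r`, and the discount factor `gamma` are illustrative assumptions; the paper's actual simulation-based prioritization heuristic is not reproduced here.

```python
import numpy as np

def gauss_seidel_values(P, r, gamma, order, tol=1e-8, max_iters=10_000):
    """Iteratively solve v[s] = r[s] + gamma * sum_t P[s, t] * v[t]
    for a Markov chain with transition matrix P and reward vector r.

    States are swept in the given `order`; because each update immediately
    reuses the freshly updated values of earlier states in the sweep
    (Gauss-Seidel style), a good ordering can propagate values faster
    and reduce the number of sweeps needed to converge.
    """
    n = len(r)
    v = np.zeros(n)
    for _ in range(max_iters):
        delta = 0.0
        for s in order:                      # update states in priority order
            new = r[s] + gamma * (P[s] @ v)  # uses already-updated entries of v
            delta = max(delta, abs(new - v[s]))
            v[s] = new
        if delta < tol:                      # stop once the sweep changes little
            break
    return v
```

In this sketch, the prioritization heuristic would supply `order` (for example, from statistics gathered over simulation runs), while the iterative solver itself is unchanged; at the fixed point the computed vector satisfies the linear system v = r + gamma * P v.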