In recent years, the growing demand for computational resources, particularly in cloud computing systems, has driven a continual increase in data centers' energy consumption, which directly raises costs and reduces resource efficiency. Although many energy-aware approaches attempt to minimize energy consumption, they cannot simultaneously minimize violations of service-level agreements (SLAs). In this paper, we propose a method based on a granular neural network, which is used to model data processing. The method predicts physical hosts' workloads before overload occurs and can reduce energy consumption while also lowering the rate of SLA violations. Unlike other techniques that rely on a single criterion, namely the history of processor utilization, our method simultaneously considers all resource-utilization criteria: processor utilization, main memory, and bandwidth. Extensive simulations with real-world workloads in the CloudSim simulator demonstrate the high efficiency of the proposed algorithm.
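To make the multi-criteria idea concrete, the following minimal Java sketch contrasts it with a CPU-only policy: it combines processor, memory, and bandwidth utilization into a single overload predicate. This is an illustrative assumption, not the paper's granular neural network or CloudSim's API; the class name, weights, and threshold are hypothetical.

```java
// Hypothetical sketch: a multi-criteria host-overload check, as opposed to
// a criterion based only on processor-utilization history. The weights and
// threshold are illustrative; the paper instead learns this mapping with a
// granular neural network trained on workload history.
public class MultiCriteriaOverloadCheck {

    // All utilization values are fractions in [0, 1].
    static boolean isHostOverloaded(double cpuUtil, double ramUtil, double bwUtil) {
        // Fixed weighted combination of the three criteria (assumed values).
        double combined = 0.5 * cpuUtil + 0.3 * ramUtil + 0.2 * bwUtil;
        return combined > 0.8; // assumed overload threshold
    }

    public static void main(String[] args) {
        // High CPU alone does not flag the host as overloaded when memory
        // and bandwidth still have headroom, unlike a CPU-only policy.
        System.out.println(isHostOverloaded(0.95, 0.40, 0.30)); // false
        System.out.println(isHostOverloaded(0.95, 0.85, 0.70)); // true
    }
}
```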