The cost‐performance tradeoff is a fundamental issue in a data center for cloud computing, and it is closely related to the two key metrics that cloud consumers and service providers care about the most, namely, quality of service and cost of service. Although quality of service can be defined in different ways, the average response time is a common choice of performance metric; although cost of service involves various considerations, the average power consumption is a common choice of cost metric. Hence, the cost‐performance tradeoff becomes the power‐performance tradeoff. In this article, we address the power‐performance tradeoff at the data center level and study cost‐performance ratio optimization using the techniques of workload management and server speed setting. In particular, we make the following tangible contributions. We solve three optimization problems, namely, (1) the workload management problem: to find a workload distribution such that the cost‐performance ratio is minimized; (2) the server speed setting problem: to find a server speed setting such that the cost‐performance ratio is minimized; (3) the workload management and server speed setting problem: to find a workload distribution and a server speed setting such that the cost‐performance ratio is minimized. All three optimization problems are analytically defined as multivariable optimization problems based on M/M/m queueing systems for multiple heterogeneous multiserver systems, together with two power consumption models, namely, the idle‐speed model and the constant‐speed model. Our approach makes it possible to quantitatively evaluate and optimize the cost‐performance ratio of a data center within a rigorously developed framework. Each multivariable optimization problem is transformed into a nonlinear system of equations; because of the sophistication of these equations, they are solved algorithmically by a numerical procedure. Furthermore, we provide approximate, accurate, and analytical solutions to the first two problems. Performance data are presented for each problem, and the accuracy of our approximate solutions is also discussed. To the best of the author's knowledge, this is the first paper that analytically and algorithmically minimizes the cost‐performance ratio of a data center with multiple heterogeneous multiserver systems using the techniques of workload management and server speed setting.
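To make the optimization objective concrete, the following sketch shows one way the cost‐performance ratio of a single M/M/m multiserver system could be formalized; the notation (task arrival rate $\lambda$, server speed $s$, average task size $\bar{r}$, power parameters $\xi$ and $\alpha$) and the definition of the ratio as average power divided by performance (the reciprocal of the average response time) are illustrative assumptions based on standard queueing and dynamic power models, not necessarily the exact formulation developed in the paper.

% Illustrative sketch (assumed notation): one M/M/m system with m servers of speed s,
% task arrival rate \lambda, and average task size \bar{r}.
\[
  \mu = \frac{s}{\bar{r}}, \qquad
  \rho = \frac{\lambda}{m\mu}, \qquad
  p_0 = \left( \sum_{k=0}^{m-1} \frac{(m\rho)^k}{k!}
        + \frac{(m\rho)^m}{m!\,(1-\rho)} \right)^{-1}, \qquad
  P_q = \frac{(m\rho)^m}{m!\,(1-\rho)}\, p_0 .
\]
% Standard M/M/m average response time (service time plus average queueing delay).
\[
  T = \frac{1}{\mu} + \frac{P_q}{m\mu(1-\rho)} .
\]
% Assumed dynamic power model: a server running at speed s consumes \xi s^{\alpha}.
% Idle-speed model: an idle server consumes no dynamic power;
% constant-speed model: every server runs at speed s at all times.
\[
  P_{\mathrm{idle}} = m\rho\,\xi s^{\alpha}, \qquad
  P_{\mathrm{const}} = m\,\xi s^{\alpha} .
\]
% One natural cost-performance ratio: average power divided by performance 1/T,
% i.e., average power times average response time.
\[
  \mathrm{CPR} = \frac{P}{1/T} = P \cdot T .
\]

Under such a formulation, the workload management problem would distribute the total arrival rate across heterogeneous multiserver systems, and the server speed setting problem would choose the speeds $s$, so that the overall ratio of this kind is minimized.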