Studies have shown that Cloud services evaluation is crucial and beneficial for both service customers and providers, and that metrics play a vital role in any evaluation implementation. Considering the numerous and varied aspects of Cloud services, a frequent suggestion is to perform evaluation from a holistic view. The currently common strategy for holistic evaluation is to use a set of metrics together with a suite of benchmarks to conduct separate experiments. Given the separate, diverse, and possibly even conflicting measurement criteria, it can still be hard for customers to understand an evaluated Cloud service from a global perspective based on such evaluation reports. Inspired by boosting approaches in machine learning, we propose the concept of Boosting Metrics to represent all the potential approaches that can deliver a summary measurement of Cloud services. Essentially, the idea of boosting metrics is to measure Cloud services holistically with respect to service properties, which complements the strategy of employing benchmark suites, namely evaluating Cloud services holistically with respect to different workloads. This paper introduces two types of preliminary approaches and unifies a set of sophisticated measurements under the notion of boosting metrics. In particular, we show that boosting metrics can be used as a summary Response for applying experimental design to Cloud services evaluation. Although the concept of Boosting Metrics was refined based on our work in the Cloud Computing domain, we believe it can be easily adapted to the evaluation of other computing paradigms.