The current cloud computing market lacks a clear comparison between the offerings of cloud service providers (CSPs). This is due to the heterogeneity of virtual machine (VM) configurations and prices, which differ among CSPs. The big players in the market offer different configurations of fixed-size VMs, and cloud customers have to choose the CSP that best fits their requirements. In today's market, with the limited performance information that CSPs provide to users, choosing a CSP can be a problem for customers. In this paper, in the context of Easi-Clouds (http://www.easi-clouds.eu/), a European ITEA 2 research project, we propose a set of performance tests based on real measurements to rank CSPs according to both their performance scores and their prices. We used a set of benchmarks to test the performance of four VM sizes (Small (S), Medium (M), Large (L), and XLarge (XL)) from each of the eight biggest CSPs (Amazon, SoftLayer, Rackspace, Google, Microsoft Azure, Aruba, DigitalOcean, Joyent). We compare performance using seven metrics: CPU performance, memory performance, disk I/O performance, mean response time (MRT), provisioning time, availability, and variability. In a second step, we include the price to obtain a performance-versus-price figure of merit. In a final step, we propose a new method that lets the user specify the importance of each performance metric as well as the importance of the price, so that CSPs are ranked according to the customer's own criteria. The result is a unified, customer-aware figure of merit that helps cloud customers select the most suitable CSP based on their own requirements.
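The customer-weighted ranking described above can be sketched as a weighted sum over normalized metric scores plus a price term. This is a minimal illustration, not the paper's actual formula: the function name, the [0, 1] normalization, and the linear price scoring are all assumptions made for the example.

```python
def rank_csps(scores, prices, metric_weights, price_weight):
    """Rank CSPs by a customer-weighted figure of merit (illustrative sketch).

    scores: {csp: {metric: normalized score in [0, 1], higher is better}}
    prices: {csp: price per hour}
    metric_weights: {metric: customer-chosen importance weight}
    price_weight: customer-chosen importance of a low price
    """
    max_price = max(prices.values())
    total_weight = sum(metric_weights.values()) + price_weight
    merit = {}
    for csp, metrics in scores.items():
        # Weighted performance contribution across the chosen metrics
        perf = sum(metric_weights[m] * metrics[m] for m in metric_weights)
        # Cheaper offerings score higher on the price axis
        price_score = price_weight * (1 - prices[csp] / max_price)
        merit[csp] = (perf + price_score) / total_weight
    # Best figure of merit first
    return sorted(merit.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: provider "B" is slower but half the price, so with equal
# weights on CPU, memory, and price it comes out ahead.
ranking = rank_csps(
    scores={"A": {"cpu": 0.9, "mem": 0.8}, "B": {"cpu": 0.6, "mem": 0.7}},
    prices={"A": 0.10, "B": 0.05},
    metric_weights={"cpu": 1.0, "mem": 1.0},
    price_weight=1.0,
)
```

Raising `price_weight` relative to the metric weights shifts the ranking toward cheaper providers, which is the customer-aware behavior the method aims for.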