INTRODUCTION

Performance measurement tools are essential to both designers and users of database systems, whether those systems are aimed at On-Line Transaction Processing (OLTP) or On-Line Analytical Processing (OLAP). Designers use performance evaluation to choose elements of an architecture and, more generally, to validate or refute hypotheses about the actual behavior of a system. Performance evaluation is thus an essential component in the development process of well-designed, scalable systems, which is of primary importance today in the context of cloud computing. Users may also employ performance evaluation, either to compare the efficiency of candidate technologies before selecting a software solution or to tune a system.

Performance evaluation by experimentation on a real system is generally referred to as benchmarking: a series of tests performed on a given system to estimate its performance in a given setting. Typically, a database benchmark comprises two main elements: a data model (conceptual schema and extension) and a workload model (a set of read and write operations) applied to the dataset according to a predefined protocol. Most benchmarks also define a set of simple or composite performance metrics, such as response time, throughput, number of input/output operations, and disk or memory usage.

The aim of this article is to present an overview of the major families of state-of-the-art data processing benchmarks, namely transaction processing benchmarks and decision support benchmarks. We also address newer trends in cloud benchmarking. Finally, we discuss the issues, tradeoffs, and future trends for data processing benchmarks.