Query optimizers have long been considered among the most complex components of a database engine, yet assessing an optimizer's quality remains a challenging task. Indeed, existing performance benchmarks for database engines (such as the TPC benchmarks) assess the performance of the query runtime system rather than that of the query optimizer. To address this challenge, this paper introduces OptMark, a toolkit for evaluating the quality of a query optimizer. OptMark is designed to offer a number of desirable properties. First, it decouples the quality of an optimizer from the quality of its underlying execution engine. Second, it independently evaluates both an optimizer's effectiveness (i.e., the quality of the plans it chooses) and its efficiency (i.e., its optimization time). OptMark also includes a generic benchmarking toolkit that is minimally invasive to any DBMS that wishes to use it: a DBMS need only provide a system-specific implementation of a simple API, which allows OptMark to run and generate benchmark scores for that system. This paper discusses the metrics we propose for evaluating an optimizer's quality, the benchmark's design, and the toolkit's API and functionality. We have implemented OptMark on the open-source MySQL engine as well as on two commercial database systems. Using these implementations, we assess the quality of the optimizers of all three systems on the TPC-DS benchmark queries.
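To make the shape of such an adapter concrete, below is a minimal Python sketch of what a system-specific implementation of the benchmarking API might look like. All class, method, and parameter names here are hypothetical illustrations; the abstract does not specify the actual OptMark interface, and the effectiveness metric shown is only one plausible way to score a chosen plan against alternative plans executed on the same engine.

```python
from abc import ABC, abstractmethod


class OptMarkAdapter(ABC):
    """Hypothetical system-specific adapter a DBMS might implement for OptMark.
    Method names and signatures are illustrative assumptions, not the real API."""

    @abstractmethod
    def optimize(self, sql: str) -> tuple[str, float]:
        """Optimize the query without executing it; return the chosen plan
        (e.g., obtained via EXPLAIN) and the optimization time in milliseconds.
        The second component supports the efficiency metric."""

    @abstractmethod
    def run_plan(self, sql: str, plan: str) -> float:
        """Force the engine to execute `sql` under the given plan (e.g., via
        optimizer hints) and return the runtime in milliseconds."""

    @abstractmethod
    def sample_plans(self, sql: str, k: int) -> list[str]:
        """Return up to k alternative plans from the optimizer's search space."""


def effectiveness_score(adapter: OptMarkAdapter, sql: str, k: int = 10) -> float:
    """One plausible effectiveness metric (an assumption, not the paper's exact
    definition): the fraction of sampled alternative plans that the optimizer's
    chosen plan beats on runtime."""
    chosen_plan, _opt_time_ms = adapter.optimize(sql)
    chosen_runtime = adapter.run_plan(sql, chosen_plan)
    alternatives = adapter.sample_plans(sql, k)
    if not alternatives:
        return 1.0
    beaten = sum(1 for p in alternatives
                 if chosen_runtime <= adapter.run_plan(sql, p))
    return beaten / len(alternatives)
```

Because every plan in such a comparison is timed on the same execution engine, a score computed this way reflects the optimizer's choices rather than raw engine speed, matching the decoupling property the abstract emphasizes.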